Test Report: Docker_Linux_containerd_arm64 22128

2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 500.37
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.69
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.29
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.29
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.39
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.31
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.27
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.05
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.68
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.99
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.37
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.7
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.4
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.17
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 97.72
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.27
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.27
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.55
358 TestKubernetesUpgrade 802.91
415 TestStartStop/group/no-preload/serial/FirstStart 509.09
437 TestStartStop/group/newest-cni/serial/FirstStart 502.16
438 TestStartStop/group/no-preload/serial/DeployApp 2.91
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 106.91
442 TestStartStop/group/no-preload/serial/SecondStart 370
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 101.24
447 TestStartStop/group/newest-cni/serial/SecondStart 373.08
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.35
452 TestStartStop/group/newest-cni/serial/Pause 9.79
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 269.3
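A single row from this table can usually be replayed on its own before digging through the full logs below. A minimal sketch, assuming minikube's documented integration-test entry point (the make integration target with TEST_ARGS); the -test.run pattern and container runtime are taken from this report, and exact flag spelling may differ by branch:

	# Hypothetical replay of one failing test group; adjust flags to your checkout.
	make integration -e TEST_ARGS="-minikube-start-args=--container-runtime=containerd -test.run TestFunctionalNewestKubernetes"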
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 08:39:58.313259    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:14.443347    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:42.155265    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.888704    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.895000    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.906346    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.927795    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.969160    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.050526    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.211939    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.533610    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:53.175656    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:54.457285    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:57.019661    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:02.141004    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:12.382285    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:32.863711    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:45:13.825147    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:35.747718    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:14.443169    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m18.889832535s)

-- stdout --
	* [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:33459
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:33459 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001119929s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001208507s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001208507s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
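The stderr above ends with minikube's own remediation hint: read the kubelet journal, then retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that triage, reusing the profile name and start flags from this run (not verified against this build):

	# Inspect the kubelet journal inside the node container.
	out/minikube-linux-arm64 -p functional-074420 ssh -- sudo journalctl -xeu kubelet | tail -n 50

	# Retry the start with the suggested kubelet override appended.
	out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd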
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
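The NetworkSettings.Ports block above shows which host ports back the node (the apiserver's 8441/tcp maps to 127.0.0.1:32791 here). For a quick check during triage, docker can extract the same mapping directly; container name taken from this report:

	# Host binding for the apiserver port inside the container.
	docker port functional-074420 8441

	# Equivalent via an inspect template.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-074420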
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 6 (313.348127ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 08:47:54.172969   47493 status.go:458] kubeconfig endpoint: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
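The status error above means the functional-074420 entry was never written to the kubeconfig, because the start never completed. Had the cluster come up, the fix the tool itself prints would apply; a minimal sketch, with the kubeconfig path copied from the error message:

	# Regenerate the kubectl context for the profile, as the status output suggests.
	out/minikube-linux-arm64 -p functional-074420 update-context

	# Confirm the endpoint now appears in the kubeconfig the tests use.
	kubectl config view --kubeconfig /home/jenkins/minikube-integration/22128-2315/kubeconfig -o jsonpath='{.clusters[?(@.name=="functional-074420")].cluster.server}'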
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdspecific-port3636403034/001:/mount-9p --alsologtostderr -v=1 --port 46464                       │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh -- ls -la /mount-9p                                                                                                               │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh sudo umount -f /mount-9p                                                                                                          │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format short --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh pgrep buildkitd                                                                                                                   │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount2                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount3                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount          │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image          │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete         │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start          │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:39:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:39:35.008642   42018 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:35.008789   42018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:35.008793   42018 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:35.008798   42018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:35.009106   42018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:39:35.009625   42018 out.go:368] Setting JSON to false
	I1213 08:39:35.010541   42018 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1327,"bootTime":1765613848,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:39:35.010614   42018 start.go:143] virtualization:  
	I1213 08:39:35.015059   42018 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:39:35.019753   42018 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:35.019904   42018 notify.go:221] Checking for updates...
	I1213 08:39:35.026824   42018 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:35.030087   42018 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:39:35.033754   42018 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:39:35.036831   42018 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:39:35.039923   42018 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:35.043137   42018 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:35.071631   42018 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:39:35.071745   42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:35.131799   42018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 08:39:35.121544673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:39:35.131895   42018 docker.go:319] overlay module found
	I1213 08:39:35.135130   42018 out.go:179] * Using the docker driver based on user configuration
	I1213 08:39:35.137936   42018 start.go:309] selected driver: docker
	I1213 08:39:35.137945   42018 start.go:927] validating driver "docker" against <nil>
	I1213 08:39:35.137971   42018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:35.138689   42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:35.193292   42018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 08:39:35.184152358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:39:35.193431   42018 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:39:35.193663   42018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:39:35.196727   42018 out.go:179] * Using Docker driver with root privileges
	I1213 08:39:35.199650   42018 cni.go:84] Creating CNI manager for ""
	I1213 08:39:35.199701   42018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:39:35.199708   42018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 08:39:35.199792   42018 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:35.202870   42018 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:39:35.205757   42018 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:39:35.208764   42018 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:39:35.211618   42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:39:35.211683   42018 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:39:35.211692   42018 cache.go:65] Caching tarball of preloaded images
	I1213 08:39:35.211690   42018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:39:35.211782   42018 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:39:35.211791   42018 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:39:35.212122   42018 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:39:35.212141   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json: {Name:mk487183f82ca2b9ae9675e1dbf064ee3afe4870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:35.231325   42018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:39:35.231337   42018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:39:35.231358   42018 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:39:35.231387   42018 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:39:35.231504   42018 start.go:364] duration metric: took 103.008µs to acquireMachinesLock for "functional-074420"
	I1213 08:39:35.231560   42018 start.go:93] Provisioning new machine with config: &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:39:35.231639   42018 start.go:125] createHost starting for "" (driver="docker")
	I1213 08:39:35.234997   42018 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 08:39:35.235294   42018 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:33459 to docker env.
	I1213 08:39:35.235318   42018 start.go:159] libmachine.API.Create for "functional-074420" (driver="docker")
	I1213 08:39:35.235340   42018 client.go:173] LocalClient.Create starting
	I1213 08:39:35.235401   42018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 08:39:35.235441   42018 main.go:143] libmachine: Decoding PEM data...
	I1213 08:39:35.235458   42018 main.go:143] libmachine: Parsing certificate...
	I1213 08:39:35.235539   42018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 08:39:35.235572   42018 main.go:143] libmachine: Decoding PEM data...
	I1213 08:39:35.235583   42018 main.go:143] libmachine: Parsing certificate...
	I1213 08:39:35.235970   42018 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 08:39:35.252270   42018 cli_runner.go:211] docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 08:39:35.252351   42018 network_create.go:284] running [docker network inspect functional-074420] to gather additional debugging logs...
	I1213 08:39:35.252368   42018 cli_runner.go:164] Run: docker network inspect functional-074420
	W1213 08:39:35.267395   42018 cli_runner.go:211] docker network inspect functional-074420 returned with exit code 1
	I1213 08:39:35.267433   42018 network_create.go:287] error running [docker network inspect functional-074420]: docker network inspect functional-074420: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-074420 not found
	I1213 08:39:35.267445   42018 network_create.go:289] output of [docker network inspect functional-074420]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-074420 not found
	
	** /stderr **
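
The failed inspect above is the existence probe: cli_runner shells out to the docker CLI and treats a nonzero exit ("network functional-074420 not found") as a signal to create the network. A minimal Go sketch of that pattern, assuming a hypothetical networkExists helper rather than minikube's actual code:

    // networkExists probes for a docker network the same way the log does:
    // run `docker network inspect` and treat a nonzero exit as "absent".
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func networkExists(name string) bool {
    	// docker network inspect exits nonzero when the network does not exist.
    	return exec.Command("docker", "network", "inspect", name).Run() == nil
    }

    func main() {
    	fmt.Println(networkExists("functional-074420"))
    }
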
	I1213 08:39:35.267556   42018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:39:35.284421   42018 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191fd10}
	I1213 08:39:35.284449   42018 network_create.go:124] attempt to create docker network functional-074420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 08:39:35.284498   42018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-074420 functional-074420
	I1213 08:39:35.350529   42018 network_create.go:108] docker network functional-074420 192.168.49.0/24 created
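
Before creating the bridge, network.go:206 reserves the first free private /24 (here 192.168.49.0/24). As a hedged sketch, one way to reject a candidate subnet that collides with an existing host interface, using only the standard library (minikube's actual reservation logic also tracks in-process reservations, visible as the reservation pointer in the log):

    // collides reports whether any local interface address falls inside cidr.
    package main

    import (
    	"fmt"
    	"net"
    )

    func collides(cidr string) (bool, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return false, err
    	}
    	addrs, err := net.InterfaceAddrs()
    	if err != nil {
    		return false, err
    	}
    	for _, a := range addrs {
    		// interface addresses print as "ip/prefix", which ParseCIDR accepts
    		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	taken, err := collides("192.168.49.0/24")
    	fmt.Println(taken, err)
    }
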
	I1213 08:39:35.350550   42018 kic.go:121] calculated static IP "192.168.49.2" for the "functional-074420" container
	I1213 08:39:35.350622   42018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 08:39:35.365623   42018 cli_runner.go:164] Run: docker volume create functional-074420 --label name.minikube.sigs.k8s.io=functional-074420 --label created_by.minikube.sigs.k8s.io=true
	I1213 08:39:35.390202   42018 oci.go:103] Successfully created a docker volume functional-074420
	I1213 08:39:35.390282   42018 cli_runner.go:164] Run: docker run --rm --name functional-074420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-074420 --entrypoint /usr/bin/test -v functional-074420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 08:39:35.929253   42018 oci.go:107] Successfully prepared a docker volume functional-074420
	I1213 08:39:35.929315   42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:39:35.929323   42018 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 08:39:35.929391   42018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-074420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 08:39:39.978855   42018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-074420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.049431633s)
	I1213 08:39:39.978882   42018 kic.go:203] duration metric: took 4.049555975s to extract preloaded images to volume ...
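
The extraction step above is one technique: mount the preload .tar.lz4 read-only into a throwaway kicbase container alongside the named volume, and let tar -I lz4 unpack it into /extractDir. A sketch of assembling that invocation from Go; extractPreload is a hypothetical helper and the tarball path is a placeholder for the full path shown in the log:

    // extractPreload untars a preloaded image tarball into a docker volume
    // by running tar inside a temporary container, as kic.go:194 does.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	return cmd.Run()
    }

    func main() {
    	err := extractPreload("/path/to/preloaded-images.tar.lz4", // placeholder
    		"functional-074420",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083")
    	if err != nil {
    		log.Fatal(err)
    	}
    }
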
	W1213 08:39:39.979029   42018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 08:39:39.979126   42018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 08:39:40.036604   42018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-074420 --name functional-074420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-074420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-074420 --network functional-074420 --ip 192.168.49.2 --volume functional-074420:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 08:39:40.331411   42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Running}}
	I1213 08:39:40.354871   42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:39:40.379568   42018 cli_runner.go:164] Run: docker exec functional-074420 stat /var/lib/dpkg/alternatives/iptables
	I1213 08:39:40.427228   42018 oci.go:144] the created container "functional-074420" has a running status.
	I1213 08:39:40.427248   42018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa...
	I1213 08:39:40.509985   42018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 08:39:40.532303   42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:39:40.558663   42018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 08:39:40.558674   42018 kic_runner.go:114] Args: [docker exec --privileged functional-074420 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 08:39:40.612847   42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:39:40.640034   42018 machine.go:94] provisionDockerMachine start ...
	I1213 08:39:40.640113   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:40.667930   42018 main.go:143] libmachine: Using SSH client type: native
	I1213 08:39:40.668249   42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:39:40.668256   42018 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:39:40.668891   42018 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53326->127.0.0.1:32788: read: connection reset by peer
	I1213 08:39:43.818892   42018 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
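The dial error at 08:39:40.668 is expected: sshd inside the fresh container is still starting, so the first handshake is reset and the provisioner retries until the command at 08:39:43.818 succeeds. A minimal sketch of such a wait loop (waitForTCP is a hypothetical helper; a successful TCP connect is only the first gate, not proof the SSH handshake will pass):

    // waitForTCP redials an address until it accepts a connection or the
    // deadline passes, mirroring the retry visible in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForTCP(addr string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(stop) {
    			return fmt.Errorf("timed out waiting for %s: %w", addr, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	fmt.Println(waitForTCP("127.0.0.1:32788", 30*time.Second))
    }
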
	I1213 08:39:43.818906   42018 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:39:43.818965   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:43.836575   42018 main.go:143] libmachine: Using SSH client type: native
	I1213 08:39:43.836879   42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:39:43.836887   42018 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:39:43.996329   42018 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:39:43.996404   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:44.017128   42018 main.go:143] libmachine: Using SSH client type: native
	I1213 08:39:44.017431   42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:39:44.017445   42018 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:39:44.168395   42018 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:39:44.168410   42018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:39:44.168430   42018 ubuntu.go:190] setting up certificates
	I1213 08:39:44.168439   42018 provision.go:84] configureAuth start
	I1213 08:39:44.168498   42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:39:44.190620   42018 provision.go:143] copyHostCerts
	I1213 08:39:44.190681   42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:39:44.190689   42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:39:44.190766   42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:39:44.190863   42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:39:44.190867   42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:39:44.190893   42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:39:44.190947   42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:39:44.190951   42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:39:44.190973   42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:39:44.191065   42018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
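
provision.go:117 issues a server certificate whose SANs cover every name and address the daemon may be reached by (the san=[...] list above). A hedged sketch of generating such a SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

    // Generate a self-signed server cert whose SANs match the log's san list.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-074420"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:     []string{"functional-074420", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
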
	I1213 08:39:44.560397   42018 provision.go:177] copyRemoteCerts
	I1213 08:39:44.560447   42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:39:44.560486   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:44.577528   42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:39:44.683250   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:39:44.701053   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:39:44.718804   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:39:44.735986   42018 provision.go:87] duration metric: took 567.525212ms to configureAuth
	I1213 08:39:44.736003   42018 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:39:44.736188   42018 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:39:44.736194   42018 machine.go:97] duration metric: took 4.096149491s to provisionDockerMachine
	I1213 08:39:44.736199   42018 client.go:176] duration metric: took 9.500854047s to LocalClient.Create
	I1213 08:39:44.736211   42018 start.go:167] duration metric: took 9.500893613s to libmachine.API.Create "functional-074420"
	I1213 08:39:44.736217   42018 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:39:44.736227   42018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:39:44.736272   42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:39:44.736315   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:44.752885   42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:39:44.855450   42018 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:39:44.858746   42018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:39:44.858763   42018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:39:44.858773   42018 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:39:44.858826   42018 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:39:44.858913   42018 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:39:44.858998   42018 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:39:44.859038   42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:39:44.866644   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:39:44.883478   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:39:44.900926   42018 start.go:296] duration metric: took 164.696307ms for postStartSetup
	I1213 08:39:44.901268   42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:39:44.919358   42018 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:39:44.919750   42018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:39:44.919803   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:44.936164   42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:39:45.038365   42018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:39:45.060110   42018 start.go:128] duration metric: took 9.828441972s to createHost
	I1213 08:39:45.060130   42018 start.go:83] releasing machines lock for "functional-074420", held for 9.828617997s
	I1213 08:39:45.060227   42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:39:45.103915   42018 out.go:179] * Found network options:
	I1213 08:39:45.111367   42018 out.go:179]   - HTTP_PROXY=localhost:33459
	W1213 08:39:45.119117   42018 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 08:39:45.124913   42018 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 08:39:45.128176   42018 ssh_runner.go:195] Run: cat /version.json
	I1213 08:39:45.128245   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:45.129162   42018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:39:45.129227   42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:39:45.160597   42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:39:45.162212   42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:39:45.269246   42018 ssh_runner.go:195] Run: systemctl --version
	I1213 08:39:45.365813   42018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:39:45.371968   42018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:39:45.372044   42018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:39:45.398970   42018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 08:39:45.398982   42018 start.go:496] detecting cgroup driver to use...
	I1213 08:39:45.399023   42018 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:39:45.399069   42018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:39:45.414492   42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:39:45.427187   42018 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:39:45.427249   42018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:39:45.444774   42018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:39:45.465107   42018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:39:45.584345   42018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:39:45.711901   42018 docker.go:234] disabling docker service ...
	I1213 08:39:45.711952   42018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:39:45.733252   42018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:39:45.746632   42018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:39:45.862418   42018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:39:45.989082   42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:39:46.001863   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:39:46.020163   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:39:46.030149   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:39:46.043688   42018 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:39:46.043761   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:39:46.053238   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:39:46.061969   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:39:46.070460   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:39:46.079288   42018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:39:46.087279   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:39:46.095872   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:39:46.104667   42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:39:46.113309   42018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:39:46.120534   42018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:39:46.127919   42018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:39:46.242734   42018 ssh_runner.go:195] Run: sudo systemctl restart containerd
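
The run of sed commands from 08:39:46.020 onward rewrites /etc/containerd/config.toml in place (sandbox image, runtime class, SystemdCgroup) before the daemon-reload and restart above. A sketch of the cgroup-driver edit as a Go regexp replace equivalent to the `sed -i -r` at containerd.go:146; the path is the one in the log, the program itself is illustrative:

    // Flip SystemdCgroup to false so containerd matches the detected
    // "cgroupfs" driver, preserving each matched line's leading indentation.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
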
	I1213 08:39:46.375439   42018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:39:46.375503   42018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:39:46.379347   42018 start.go:564] Will wait 60s for crictl version
	I1213 08:39:46.379401   42018 ssh_runner.go:195] Run: which crictl
	I1213 08:39:46.382863   42018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:39:46.407695   42018 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:39:46.407775   42018 ssh_runner.go:195] Run: containerd --version
	I1213 08:39:46.429202   42018 ssh_runner.go:195] Run: containerd --version
	I1213 08:39:46.453205   42018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:39:46.456067   42018 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:39:46.473296   42018 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:39:46.477132   42018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:39:46.486681   42018 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:39:46.486784   42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:39:46.486847   42018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:39:46.513672   42018 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:39:46.513684   42018 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:39:46.513756   42018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:39:46.537482   42018 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:39:46.537493   42018 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:39:46.537499   42018 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:39:46.537607   42018 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:39:46.537669   42018 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:39:46.561711   42018 cni.go:84] Creating CNI manager for ""
	I1213 08:39:46.561722   42018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:39:46.561735   42018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:39:46.561756   42018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:39:46.561864   42018 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:39:46.561930   42018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:39:46.569762   42018 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:39:46.569819   42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:39:46.577342   42018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:39:46.590349   42018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:39:46.603598   42018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:39:46.616603   42018 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:39:46.620266   42018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
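
The /etc/hosts rewrite above is idempotent: drop any stale line ending in a tab plus control-plane.minikube.internal, append the fresh mapping, and copy the result back over the file. A sketch of the same idea (ensureHostsEntry is a hypothetical helper; it renames a temp file where the logged command uses cp):

    // ensureHostsEntry removes stale lines for name and appends ip<TAB>name.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	kept := lines[:0]
    	for _, line := range lines {
    		if !strings.HasSuffix(line, "\t"+name) { // mirrors grep -v $'\t<name>$'
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }
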
	I1213 08:39:46.630271   42018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:39:46.754305   42018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:39:46.770188   42018 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:39:46.770198   42018 certs.go:195] generating shared ca certs ...
	I1213 08:39:46.770212   42018 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:46.770344   42018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:39:46.770385   42018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:39:46.770391   42018 certs.go:257] generating profile certs ...
	I1213 08:39:46.770451   42018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:39:46.770460   42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt with IP's: []
	I1213 08:39:47.026369   42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt ...
	I1213 08:39:47.026394   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: {Name:mkf94bf2e36ee2a82c3216cba6efa264a3df13aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.026604   42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key ...
	I1213 08:39:47.026611   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key: {Name:mkc6f3d57c62afe223b051632170572e08ab1587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.026707   42018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:39:47.026720   42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 08:39:47.158042   42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 ...
	I1213 08:39:47.158057   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068: {Name:mkb27a52c7997e89ac0f18c5820641571e6e2856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.158249   42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068 ...
	I1213 08:39:47.158259   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068: {Name:mk951bc88000f094f69ff3a51f592a8492883138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.158344   42018 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt
	I1213 08:39:47.158420   42018 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key
	I1213 08:39:47.158472   42018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:39:47.158485   42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt with IP's: []
	I1213 08:39:47.250575   42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt ...
	I1213 08:39:47.250589   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt: {Name:mka2c0137322e7e1ccf578821ae754fe9cb2d3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.250769   42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key ...
	I1213 08:39:47.250776   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key: {Name:mkc71cca0e53de1bfc7eed430ccb4047ca2b0852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:39:47.250966   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:39:47.251005   42018 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:39:47.251013   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:39:47.251041   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:39:47.251064   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:39:47.251087   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:39:47.251133   42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:39:47.251784   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:39:47.270551   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:39:47.290740   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:39:47.310143   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:39:47.329665   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:39:47.347558   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:39:47.365525   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:39:47.383483   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:39:47.401153   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:39:47.419849   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:39:47.439488   42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:39:47.460458   42018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:39:47.473321   42018 ssh_runner.go:195] Run: openssl version
	I1213 08:39:47.479379   42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:39:47.486769   42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:39:47.494131   42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:39:47.497903   42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:39:47.497957   42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:39:47.539850   42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:39:47.547972   42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 08:39:47.556051   42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:39:47.564230   42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:39:47.572240   42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:39:47.576460   42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:39:47.576535   42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:39:47.618080   42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:39:47.625373   42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 08:39:47.632911   42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:39:47.640340   42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:39:47.647879   42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:39:47.651656   42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:39:47.651733   42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:39:47.692835   42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:39:47.701070   42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
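
Each openssl x509 -hash / ln -fs pair above installs a certificate under OpenSSL's lookup-by-hash convention: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file. A sketch reproducing one such pair; it shells out to openssl for the hash, since computing the subject hash natively is more involved, and it must run as root like the logged commands:

    // Create the /etc/ssl/certs/<hash>.0 symlink for one certificate.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/41202.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any existing link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    }
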
	I1213 08:39:47.708764   42018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:39:47.712327   42018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 08:39:47.712367   42018 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:47.712434   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:39:47.712485   42018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:39:47.738096   42018 cri.go:89] found id: ""
	I1213 08:39:47.738161   42018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:39:47.745866   42018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:39:47.753358   42018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:39:47.753408   42018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:39:47.761299   42018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:39:47.761308   42018 kubeadm.go:158] found existing configuration files:
	
	I1213 08:39:47.761364   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:39:47.768974   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:39:47.769027   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:39:47.776504   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:39:47.783908   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:39:47.783959   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:39:47.791205   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:39:47.798664   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:39:47.798730   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:39:47.805928   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:39:47.813474   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:39:47.813528   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:39:47.820866   42018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:39:47.952729   42018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 08:39:47.953143   42018 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 08:39:48.022518   42018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:43:51.217302   42018 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 08:43:51.217323   42018 kubeadm.go:319] 
	I1213 08:43:51.217395   42018 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 08:43:51.221041   42018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:43:51.221132   42018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:43:51.221292   42018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:43:51.221393   42018 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:43:51.221453   42018 kubeadm.go:319] OS: Linux
	I1213 08:43:51.221982   42018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:43:51.222071   42018 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:43:51.222155   42018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:43:51.222241   42018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:43:51.222359   42018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:43:51.222573   42018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:43:51.222633   42018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:43:51.222697   42018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:43:51.222751   42018 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:43:51.222838   42018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:43:51.222958   42018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:43:51.223051   42018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:43:51.223121   42018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:43:51.225963   42018 out.go:252]   - Generating certificates and keys ...
	I1213 08:43:51.226053   42018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:43:51.226132   42018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:43:51.226223   42018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 08:43:51.226292   42018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 08:43:51.226358   42018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 08:43:51.226412   42018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 08:43:51.226473   42018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 08:43:51.226597   42018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:43:51.226648   42018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 08:43:51.226789   42018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:43:51.226853   42018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 08:43:51.226917   42018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 08:43:51.226962   42018 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 08:43:51.227034   42018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:43:51.227085   42018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:43:51.227141   42018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:43:51.227199   42018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:43:51.227272   42018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:43:51.227333   42018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:43:51.227430   42018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:43:51.227504   42018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:43:51.232281   42018 out.go:252]   - Booting up control plane ...
	I1213 08:43:51.232373   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:43:51.232481   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:43:51.232552   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:43:51.232655   42018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:43:51.232766   42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:43:51.232887   42018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:43:51.232972   42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:43:51.233010   42018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:43:51.233146   42018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:43:51.233256   42018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:43:51.233328   42018 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001119929s
	I1213 08:43:51.233331   42018 kubeadm.go:319] 
	I1213 08:43:51.233391   42018 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 08:43:51.233438   42018 kubeadm.go:319] 	- The kubelet is not running
	I1213 08:43:51.233556   42018 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 08:43:51.233560   42018 kubeadm.go:319] 
	I1213 08:43:51.233663   42018 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 08:43:51.233694   42018 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 08:43:51.233738   42018 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 08:43:51.233801   42018 kubeadm.go:319] 
	W1213 08:43:51.233871   42018 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001119929s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
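The stderr above shows that kubeadm gave up after the kubelet never answered its http://127.0.0.1:10248/healthz probe, and the SystemVerification warning points at the host's legacy cgroup v1 hierarchy. As a minimal sketch (assuming shell access to the node, e.g. via 'minikube ssh -p functional-074420'), the cgroup hierarchy and the kubelet's exit reason can be confirmed with:

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup
    # Inspect why the kubelet keeps exiting (the command kubeadm itself suggests above).
    sudo journalctl -xeu kubelet | tail -n 20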
	
	I1213 08:43:51.233960   42018 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 08:43:51.644766   42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:43:51.658140   42018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:43:51.658191   42018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:43:51.666791   42018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:43:51.666810   42018 kubeadm.go:158] found existing configuration files:
	
	I1213 08:43:51.666861   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:43:51.674638   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:43:51.674701   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:43:51.682186   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:43:51.689889   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:43:51.689948   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:43:51.697487   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:43:51.705045   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:43:51.705100   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:43:51.712252   42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:43:51.719870   42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:43:51.719935   42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:43:51.727359   42018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:43:51.769563   42018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:43:51.769610   42018 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:43:51.835643   42018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:43:51.835708   42018 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:43:51.835745   42018 kubeadm.go:319] OS: Linux
	I1213 08:43:51.835789   42018 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:43:51.835836   42018 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:43:51.835882   42018 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:43:51.835929   42018 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:43:51.835980   42018 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:43:51.836027   42018 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:43:51.836073   42018 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:43:51.836119   42018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:43:51.836164   42018 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:43:51.905250   42018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:43:51.905390   42018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:43:51.905493   42018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:43:51.911999   42018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:43:51.915619   42018 out.go:252]   - Generating certificates and keys ...
	I1213 08:43:51.915706   42018 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:43:51.915765   42018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:43:51.915837   42018 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:43:51.915894   42018 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:43:51.915959   42018 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:43:51.916230   42018 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:43:51.916310   42018 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:43:51.916557   42018 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:43:51.916629   42018 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:43:51.916875   42018 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:43:51.917066   42018 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:43:51.917120   42018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:43:52.072887   42018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:43:52.306102   42018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:43:52.396478   42018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:43:52.909784   42018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:43:53.263053   42018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:43:53.263865   42018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:43:53.266707   42018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:43:53.270092   42018 out.go:252]   - Booting up control plane ...
	I1213 08:43:53.270214   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:43:53.270336   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:43:53.270436   42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:43:53.294198   42018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:43:53.294312   42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:43:53.301591   42018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:43:53.301843   42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:43:53.301882   42018 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:43:53.435451   42018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:43:53.435633   42018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:47:53.436618   42018 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001208507s
	I1213 08:47:53.436637   42018 kubeadm.go:319] 
	I1213 08:47:53.436693   42018 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 08:47:53.436725   42018 kubeadm.go:319] 	- The kubelet is not running
	I1213 08:47:53.436829   42018 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 08:47:53.436832   42018 kubeadm.go:319] 
	I1213 08:47:53.436936   42018 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 08:47:53.436967   42018 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 08:47:53.436997   42018 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 08:47:53.437000   42018 kubeadm.go:319] 
	I1213 08:47:53.441253   42018 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 08:47:53.441671   42018 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 08:47:53.441782   42018 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:47:53.442020   42018 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 08:47:53.442028   42018 kubeadm.go:319] 
	I1213 08:47:53.442095   42018 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 08:47:53.442143   42018 kubeadm.go:403] duration metric: took 8m5.729777522s to StartCluster
	I1213 08:47:53.442187   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:47:53.442246   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:47:53.467027   42018 cri.go:89] found id: ""
	I1213 08:47:53.467050   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.467057   42018 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:47:53.467062   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:47:53.467124   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:47:53.492575   42018 cri.go:89] found id: ""
	I1213 08:47:53.492588   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.492599   42018 logs.go:284] No container was found matching "etcd"
	I1213 08:47:53.492603   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:47:53.492661   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:47:53.516560   42018 cri.go:89] found id: ""
	I1213 08:47:53.516574   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.516580   42018 logs.go:284] No container was found matching "coredns"
	I1213 08:47:53.516585   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:47:53.516640   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:47:53.544885   42018 cri.go:89] found id: ""
	I1213 08:47:53.544899   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.544905   42018 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:47:53.544910   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:47:53.544966   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:47:53.571557   42018 cri.go:89] found id: ""
	I1213 08:47:53.571570   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.571577   42018 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:47:53.571582   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:47:53.571641   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:47:53.597400   42018 cri.go:89] found id: ""
	I1213 08:47:53.597414   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.597420   42018 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:47:53.597426   42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:47:53.597482   42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:47:53.620080   42018 cri.go:89] found id: ""
	I1213 08:47:53.620093   42018 logs.go:282] 0 containers: []
	W1213 08:47:53.620099   42018 logs.go:284] No container was found matching "kindnet"
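Every lookup in the loop above returns an empty id list: containerd has no control-plane containers at all, which is consistent with the kubelet never launching the static pods. A rough manual equivalent of those per-component checks, collapsed into one command, might look like this (assuming crictl's --name filter, which takes a regular expression):

    # List containers in any state whose name matches a control-plane component.
    sudo crictl ps -a --name 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'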
	I1213 08:47:53.620107   42018 logs.go:123] Gathering logs for kubelet ...
	I1213 08:47:53.620117   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:47:53.676418   42018 logs.go:123] Gathering logs for dmesg ...
	I1213 08:47:53.676436   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:47:53.687053   42018 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:47:53.687068   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:47:53.750366   42018 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:47:53.741274    4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:53.742022    4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:53.743792    4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:53.744434    4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:53.745919    4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: (same stderr as quoted directly above)
	I1213 08:47:53.750377   42018 logs.go:123] Gathering logs for containerd ...
	I1213 08:47:53.750387   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:47:53.790702   42018 logs.go:123] Gathering logs for container status ...
	I1213 08:47:53.790722   42018 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 08:47:53.821228   42018 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001208507s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 08:47:53.821260   42018 out.go:285] * 
	W1213 08:47:53.821320   42018 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr identical to the "Error starting cluster" dump immediately above)
	W1213 08:47:53.821338   42018 out.go:285] * 
	W1213 08:47:53.823454   42018 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:47:53.829350   42018 out.go:203] 
	W1213 08:47:53.833068   42018 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr identical to the "Error starting cluster" dump above)
	W1213 08:47:53.833383   42018 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 08:47:53.833466   42018 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 08:47:53.838401   42018 out.go:203] 
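The suggestion above is minikube's generic remedy for a kubelet that never becomes healthy. Spelled out as a full command, using only the profile, driver, runtime and version recorded in this log, the retry would look roughly like:

    # Sketch of the suggested retry with the kubelet cgroup-driver override.
    minikube start -p functional-074420 --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

Given the kubelet journal below, the failure here is the cgroup v1 validation rather than a driver mismatch, so this override alone would probably not be enough.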
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319755842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319823584Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319919946Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319982305Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320015446Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320031405Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320041777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320055791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320074958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320113260Z" level=info msg="Connect containerd service"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320426706Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.321024610Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.333892514Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334078049Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334228073Z" level=info msg="Start recovering state"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334161964Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.372946246Z" level=info msg="Start event monitor"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373143261Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373208427Z" level=info msg="Start streaming server"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373276907Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373335952Z" level=info msg="runtime interface starting up..."
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373390885Z" level=info msg="starting plugins..."
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373456141Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:39:46 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.375241885Z" level=info msg="containerd successfully booted in 0.081295s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:47:54.798023    4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:54.798564    4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:54.800057    4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:54.800486    4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:47:54.801944    4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 08:47:54 up 30 min,  0 user,  load average: 0.47, 0.54, 0.65
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:47:51 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 08:47:52 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:52 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:52 functional-074420 kubelet[4686]: E1213 08:47:52.322071    4686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 08:47:53 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:53 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:53 functional-074420 kubelet[4692]: E1213 08:47:53.068125    4692 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 08:47:53 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:53 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:53 functional-074420 kubelet[4768]: E1213 08:47:53.843780    4768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 08:47:54 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:54 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:47:54 functional-074420 kubelet[4819]: E1213 08:47:54.593626    4819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
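
Note: the repeating validation failure above ("kubelet is configured to not run on a host using cgroup v1", 321 restarts and counting) is the actual root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host by default, and this Ubuntu 20.04 / 5.15 Jenkins host is still on v1. One way to confirm which cgroup hierarchy a host runs:

    # cgroup2fs => unified hierarchy (cgroup v2); tmpfs => legacy cgroup v1
    stat -fc %T /sys/fs/cgroup/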
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 6 (399.858737ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 08:47:55.311702   47709 status.go:458] kubeconfig endpoint: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
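
Note: the status helper falls back to "Stopped" because the profile's endpoint is missing from the kubeconfig (the status.go:458 error above). The fix the warning itself suggests, spelled out for this profile:

    # rewrite the kubeconfig entry for this profile, then confirm the context
    out/minikube-linux-arm64 -p functional-074420 update-context
    kubectl config current-context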
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 08:47:55.326928    4120 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --alsologtostderr -v=8
E1213 08:48:51.888735    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:49:19.589023    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:52:14.443291    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:37.516952    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:53:51.887810    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --alsologtostderr -v=8: exit status 80 (6m5.467109162s)

-- stdout --
	* [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 08:47:55.372522   47783 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:47:55.372733   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.372759   47783 out.go:374] Setting ErrFile to fd 2...
	I1213 08:47:55.372779   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.373071   47783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:47:55.373500   47783 out.go:368] Setting JSON to false
	I1213 08:47:55.374339   47783 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1828,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:47:55.374435   47783 start.go:143] virtualization:  
	I1213 08:47:55.378014   47783 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:47:55.381059   47783 notify.go:221] Checking for updates...
	I1213 08:47:55.381456   47783 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:47:55.384645   47783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:47:55.387475   47783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:55.390285   47783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:47:55.393179   47783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:47:55.396170   47783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:47:55.399625   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:55.399723   47783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:47:55.421152   47783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:47:55.421278   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.479286   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.469949512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.479410   47783 docker.go:319] overlay module found
	I1213 08:47:55.482469   47783 out.go:179] * Using the docker driver based on existing profile
	I1213 08:47:55.485237   47783 start.go:309] selected driver: docker
	I1213 08:47:55.485259   47783 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.485359   47783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:47:55.485469   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.552137   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.542465837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.552549   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:55.552614   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:55.552664   47783 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.555904   47783 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:47:55.558801   47783 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:47:55.561846   47783 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:47:55.564866   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:55.564922   47783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:47:55.564938   47783 cache.go:65] Caching tarball of preloaded images
	I1213 08:47:55.564963   47783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:47:55.565027   47783 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:47:55.565039   47783 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:47:55.565188   47783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:47:55.585020   47783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:47:55.585044   47783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:47:55.585064   47783 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:47:55.585094   47783 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:47:55.585169   47783 start.go:364] duration metric: took 45.161µs to acquireMachinesLock for "functional-074420"
	I1213 08:47:55.585195   47783 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:47:55.585204   47783 fix.go:54] fixHost starting: 
	I1213 08:47:55.585456   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:55.601925   47783 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:47:55.601956   47783 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:47:55.605110   47783 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:47:55.605143   47783 machine.go:94] provisionDockerMachine start ...
	I1213 08:47:55.605228   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.622184   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.622521   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.622536   47783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:47:55.770899   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.770923   47783 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:47:55.770990   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.788917   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.789224   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.789243   47783 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:47:55.944141   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.944216   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.963276   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.963669   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.963693   47783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:47:56.123813   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:47:56.123839   47783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:47:56.123865   47783 ubuntu.go:190] setting up certificates
	I1213 08:47:56.123875   47783 provision.go:84] configureAuth start
	I1213 08:47:56.123935   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.141934   47783 provision.go:143] copyHostCerts
	I1213 08:47:56.141983   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142030   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:47:56.142044   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142121   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:47:56.142216   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142238   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:47:56.142247   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142276   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:47:56.142329   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142361   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:47:56.142370   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142397   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:47:56.142457   47783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:47:56.320875   47783 provision.go:177] copyRemoteCerts
	I1213 08:47:56.320949   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:47:56.320994   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.338054   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.442993   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 08:47:56.443052   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:47:56.459467   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 08:47:56.459650   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:47:56.476836   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 08:47:56.476894   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:47:56.494408   47783 provision.go:87] duration metric: took 370.509157ms to configureAuth
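
Note: configureAuth above regenerates the machine server certificate with the SANs listed on the provision.go:117 line (127.0.0.1, 192.168.49.2, functional-074420, localhost, minikube) and copies it to /etc/docker inside the node. To double-check the SANs actually present on the copied cert, assuming openssl is available in the kicbase image:

    docker exec functional-074420 openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'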
	I1213 08:47:56.494435   47783 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:47:56.494611   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:56.494624   47783 machine.go:97] duration metric: took 889.474725ms to provisionDockerMachine
	I1213 08:47:56.494633   47783 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:47:56.494644   47783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:47:56.494700   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:47:56.494748   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.511710   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.615158   47783 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:47:56.618357   47783 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:47:56.618378   47783 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:47:56.618383   47783 command_runner.go:130] > VERSION_ID="12"
	I1213 08:47:56.618388   47783 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:47:56.618392   47783 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:47:56.618422   47783 command_runner.go:130] > ID=debian
	I1213 08:47:56.618436   47783 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:47:56.618441   47783 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:47:56.618448   47783 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:47:56.618517   47783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:47:56.618537   47783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:47:56.618550   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:47:56.618607   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:47:56.618691   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:47:56.618702   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1213 08:47:56.618783   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:47:56.618792   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1213 08:47:56.618842   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:47:56.626162   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:56.643608   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:47:56.661460   47783 start.go:296] duration metric: took 166.811201ms for postStartSetup
	I1213 08:47:56.661553   47783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:47:56.661603   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.678627   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.785005   47783 command_runner.go:130] > 14%
	I1213 08:47:56.785418   47783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:47:56.789762   47783 command_runner.go:130] > 169G
	I1213 08:47:56.790146   47783 fix.go:56] duration metric: took 1.204938515s for fixHost
	I1213 08:47:56.790168   47783 start.go:83] releasing machines lock for "functional-074420", held for 1.204983079s
	I1213 08:47:56.790231   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.811813   47783 ssh_runner.go:195] Run: cat /version.json
	I1213 08:47:56.811877   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.812180   47783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:47:56.812227   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.839131   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.843453   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:57.035647   47783 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 08:47:57.038511   47783 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:47:57.038690   47783 ssh_runner.go:195] Run: systemctl --version
	I1213 08:47:57.044708   47783 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:47:57.044761   47783 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:47:57.045134   47783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:47:57.049401   47783 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:47:57.049443   47783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:47:57.049503   47783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:47:57.057127   47783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:47:57.057158   47783 start.go:496] detecting cgroup driver to use...
	I1213 08:47:57.057211   47783 detect.go:187] detected "cgroupfs" cgroup driver on host os
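
Note: the "cgroupfs" detection above matches what the docker daemon reports for itself (see CgroupDriver:cgroupfs in the docker info dump earlier); the same answer is available directly:

    docker info --format '{{.CgroupDriver}}'   # prints cgroupfs on this host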
	I1213 08:47:57.057279   47783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:47:57.072743   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:47:57.086014   47783 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:47:57.086118   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:47:57.102029   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:47:57.115088   47783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:47:57.226726   47783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:47:57.347870   47783 docker.go:234] disabling docker service ...
	I1213 08:47:57.347940   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:47:57.363202   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:47:57.377010   47783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:47:57.506500   47783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:47:57.649131   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:47:57.662497   47783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:47:57.677018   47783 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:47:57.678207   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:47:57.688555   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:47:57.698272   47783 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:47:57.698370   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:47:57.707500   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.716692   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:47:57.725739   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.734886   47783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:47:57.743485   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:47:57.753073   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:47:57.761993   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:47:57.770719   47783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:47:57.777695   47783 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:47:57.778683   47783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:47:57.786237   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:57.908393   47783 ssh_runner.go:195] Run: sudo systemctl restart containerd
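
Note: the run of sed commands above rewrites /etc/containerd/config.toml in place (pause image 3.10.1, SystemdCgroup = false to match the cgroupfs driver, conf_dir = /etc/cni/net.d) before the daemon-reload and restart. A sketch of how to verify those edits took effect inside the node:

    # the raw keys minikube just rewrote
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # containerd's effective (merged) view of its configuration
    sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'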
	I1213 08:47:58.046253   47783 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:47:58.046368   47783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:47:58.050493   47783 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 08:47:58.050558   47783 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:47:58.050578   47783 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1213 08:47:58.050603   47783 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:58.050636   47783 command_runner.go:130] > Access: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050663   47783 command_runner.go:130] > Modify: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050685   47783 command_runner.go:130] > Change: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050720   47783 command_runner.go:130] >  Birth: -
	I1213 08:47:58.050927   47783 start.go:564] Will wait 60s for crictl version
	I1213 08:47:58.051002   47783 ssh_runner.go:195] Run: which crictl
	I1213 08:47:58.054661   47783 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:47:58.054852   47783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:47:58.077876   47783 command_runner.go:130] > Version:  0.1.0
	I1213 08:47:58.077939   47783 command_runner.go:130] > RuntimeName:  containerd
	I1213 08:47:58.077961   47783 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 08:47:58.077985   47783 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:47:58.080051   47783 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:47:58.080159   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.100302   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.101953   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.119235   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.126521   47783 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:47:58.129463   47783 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:47:58.145273   47783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:47:58.149369   47783 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 08:47:58.149453   47783 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:47:58.149580   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:58.149657   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.174191   47783 command_runner.go:130] > {
	I1213 08:47:58.174214   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.174219   47783 command_runner.go:130] >     {
	I1213 08:47:58.174232   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.174237   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174242   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.174246   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174250   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174259   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.174263   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174267   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.174271   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174275   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174278   47783 command_runner.go:130] >     },
	I1213 08:47:58.174281   47783 command_runner.go:130] >     {
	I1213 08:47:58.174289   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.174299   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174305   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.174308   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174313   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174321   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.174328   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174332   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.174336   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174340   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174343   47783 command_runner.go:130] >     },
	I1213 08:47:58.174349   47783 command_runner.go:130] >     {
	I1213 08:47:58.174356   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.174361   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174366   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.174371   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174384   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174395   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.174399   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174403   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.174409   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.174417   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174421   47783 command_runner.go:130] >     },
	I1213 08:47:58.174424   47783 command_runner.go:130] >     {
	I1213 08:47:58.174430   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.174436   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174441   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.174444   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174449   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174458   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.174464   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174468   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.174472   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174475   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174478   47783 command_runner.go:130] >       },
	I1213 08:47:58.174487   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174491   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174494   47783 command_runner.go:130] >     },
	I1213 08:47:58.174497   47783 command_runner.go:130] >     {
	I1213 08:47:58.174507   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.174511   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174518   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.174522   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174526   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174533   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.174539   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174545   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.174551   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174559   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174562   47783 command_runner.go:130] >       },
	I1213 08:47:58.174566   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174576   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174580   47783 command_runner.go:130] >     },
	I1213 08:47:58.174584   47783 command_runner.go:130] >     {
	I1213 08:47:58.174594   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.174601   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174607   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.174610   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174614   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174625   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.174631   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174635   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.174638   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174642   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174645   47783 command_runner.go:130] >       },
	I1213 08:47:58.174649   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174655   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174659   47783 command_runner.go:130] >     },
	I1213 08:47:58.174663   47783 command_runner.go:130] >     {
	I1213 08:47:58.174671   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.174677   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174681   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.174684   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174688   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174699   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.174704   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174709   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.174713   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174716   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174721   47783 command_runner.go:130] >     },
	I1213 08:47:58.174725   47783 command_runner.go:130] >     {
	I1213 08:47:58.174732   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.174742   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174747   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.174753   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174757   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174765   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.174774   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174781   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.174784   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174788   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174797   47783 command_runner.go:130] >       },
	I1213 08:47:58.174802   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174808   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174811   47783 command_runner.go:130] >     },
	I1213 08:47:58.174814   47783 command_runner.go:130] >     {
	I1213 08:47:58.174821   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.174828   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174833   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.174836   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174840   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174848   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.174851   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174855   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.174860   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174864   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.174875   47783 command_runner.go:130] >       },
	I1213 08:47:58.174880   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174884   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.174887   47783 command_runner.go:130] >     }
	I1213 08:47:58.174890   47783 command_runner.go:130] >   ]
	I1213 08:47:58.174893   47783 command_runner.go:130] > }
	I1213 08:47:58.175043   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.175056   47783 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:47:58.175117   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.196592   47783 command_runner.go:130] > {
	I1213 08:47:58.196612   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.196616   47783 command_runner.go:130] >     {
	I1213 08:47:58.196626   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.196631   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196637   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.196641   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196644   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196654   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.196660   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196664   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.196674   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196678   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196682   47783 command_runner.go:130] >     },
	I1213 08:47:58.196685   47783 command_runner.go:130] >     {
	I1213 08:47:58.196701   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.196710   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196715   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.196719   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196723   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196732   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.196739   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196745   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.196753   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196757   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196764   47783 command_runner.go:130] >     },
	I1213 08:47:58.196768   47783 command_runner.go:130] >     {
	I1213 08:47:58.196782   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.196787   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196793   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.196798   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196807   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196825   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.196833   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196838   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.196847   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.196852   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196861   47783 command_runner.go:130] >     },
	I1213 08:47:58.196864   47783 command_runner.go:130] >     {
	I1213 08:47:58.196871   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.196875   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196880   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.196884   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196888   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196897   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.196904   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196908   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.196912   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.196916   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.196924   47783 command_runner.go:130] >       },
	I1213 08:47:58.196929   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196936   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196940   47783 command_runner.go:130] >     },
	I1213 08:47:58.196943   47783 command_runner.go:130] >     {
	I1213 08:47:58.196953   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.196958   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196963   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.196968   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196973   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196984   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.196993   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196998   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.197005   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197015   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197022   47783 command_runner.go:130] >       },
	I1213 08:47:58.197030   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197034   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197037   47783 command_runner.go:130] >     },
	I1213 08:47:58.197040   47783 command_runner.go:130] >     {
	I1213 08:47:58.197048   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.197056   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197063   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.197069   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197074   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197086   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.197094   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197098   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.197105   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197109   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197113   47783 command_runner.go:130] >       },
	I1213 08:47:58.197117   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197121   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197126   47783 command_runner.go:130] >     },
	I1213 08:47:58.197129   47783 command_runner.go:130] >     {
	I1213 08:47:58.197140   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.197144   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197154   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.197158   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197162   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197173   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.197180   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197185   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.197189   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197194   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197198   47783 command_runner.go:130] >     },
	I1213 08:47:58.197201   47783 command_runner.go:130] >     {
	I1213 08:47:58.197209   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.197216   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197225   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.197232   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197237   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197248   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.197255   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197259   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.197266   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197270   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197273   47783 command_runner.go:130] >       },
	I1213 08:47:58.197279   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197283   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197286   47783 command_runner.go:130] >     },
	I1213 08:47:58.197289   47783 command_runner.go:130] >     {
	I1213 08:47:58.197296   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.197304   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197309   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.197313   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197320   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197329   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.197335   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197339   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.197346   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197351   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.197356   47783 command_runner.go:130] >       },
	I1213 08:47:58.197362   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197366   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.197369   47783 command_runner.go:130] >     }
	I1213 08:47:58.197372   47783 command_runner.go:130] >   ]
	I1213 08:47:58.197375   47783 command_runner.go:130] > }
	I1213 08:47:58.199421   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.199439   47783 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:47:58.199455   47783 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:47:58.199601   47783 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:47:58.199669   47783 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:47:58.226280   47783 command_runner.go:130] > {
	I1213 08:47:58.226303   47783 command_runner.go:130] >   "cniconfig": {
	I1213 08:47:58.226310   47783 command_runner.go:130] >     "Networks": [
	I1213 08:47:58.226314   47783 command_runner.go:130] >       {
	I1213 08:47:58.226319   47783 command_runner.go:130] >         "Config": {
	I1213 08:47:58.226324   47783 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 08:47:58.226329   47783 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 08:47:58.226333   47783 command_runner.go:130] >           "Plugins": [
	I1213 08:47:58.226336   47783 command_runner.go:130] >             {
	I1213 08:47:58.226340   47783 command_runner.go:130] >               "Network": {
	I1213 08:47:58.226344   47783 command_runner.go:130] >                 "ipam": {},
	I1213 08:47:58.226350   47783 command_runner.go:130] >                 "type": "loopback"
	I1213 08:47:58.226358   47783 command_runner.go:130] >               },
	I1213 08:47:58.226364   47783 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 08:47:58.226371   47783 command_runner.go:130] >             }
	I1213 08:47:58.226374   47783 command_runner.go:130] >           ],
	I1213 08:47:58.226384   47783 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 08:47:58.226388   47783 command_runner.go:130] >         },
	I1213 08:47:58.226398   47783 command_runner.go:130] >         "IFName": "lo"
	I1213 08:47:58.226402   47783 command_runner.go:130] >       }
	I1213 08:47:58.226405   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226410   47783 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 08:47:58.226415   47783 command_runner.go:130] >     "PluginDirs": [
	I1213 08:47:58.226419   47783 command_runner.go:130] >       "/opt/cni/bin"
	I1213 08:47:58.226425   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226430   47783 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 08:47:58.226442   47783 command_runner.go:130] >     "Prefix": "eth"
	I1213 08:47:58.226445   47783 command_runner.go:130] >   },
	I1213 08:47:58.226448   47783 command_runner.go:130] >   "config": {
	I1213 08:47:58.226454   47783 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 08:47:58.226459   47783 command_runner.go:130] >       "/etc/cdi",
	I1213 08:47:58.226466   47783 command_runner.go:130] >       "/var/run/cdi"
	I1213 08:47:58.226472   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226480   47783 command_runner.go:130] >     "cni": {
	I1213 08:47:58.226484   47783 command_runner.go:130] >       "binDir": "",
	I1213 08:47:58.226487   47783 command_runner.go:130] >       "binDirs": [
	I1213 08:47:58.226491   47783 command_runner.go:130] >         "/opt/cni/bin"
	I1213 08:47:58.226495   47783 command_runner.go:130] >       ],
	I1213 08:47:58.226499   47783 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 08:47:58.226503   47783 command_runner.go:130] >       "confTemplate": "",
	I1213 08:47:58.226507   47783 command_runner.go:130] >       "ipPref": "",
	I1213 08:47:58.226510   47783 command_runner.go:130] >       "maxConfNum": 1,
	I1213 08:47:58.226514   47783 command_runner.go:130] >       "setupSerially": false,
	I1213 08:47:58.226519   47783 command_runner.go:130] >       "useInternalLoopback": false
	I1213 08:47:58.226524   47783 command_runner.go:130] >     },
	I1213 08:47:58.226530   47783 command_runner.go:130] >     "containerd": {
	I1213 08:47:58.226538   47783 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 08:47:58.226543   47783 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 08:47:58.226548   47783 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 08:47:58.226552   47783 command_runner.go:130] >       "runtimes": {
	I1213 08:47:58.226557   47783 command_runner.go:130] >         "runc": {
	I1213 08:47:58.226562   47783 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 08:47:58.226566   47783 command_runner.go:130] >           "PodAnnotations": null,
	I1213 08:47:58.226570   47783 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 08:47:58.226575   47783 command_runner.go:130] >           "cgroupWritable": false,
	I1213 08:47:58.226580   47783 command_runner.go:130] >           "cniConfDir": "",
	I1213 08:47:58.226586   47783 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 08:47:58.226591   47783 command_runner.go:130] >           "io_type": "",
	I1213 08:47:58.226596   47783 command_runner.go:130] >           "options": {
	I1213 08:47:58.226601   47783 command_runner.go:130] >             "BinaryName": "",
	I1213 08:47:58.226607   47783 command_runner.go:130] >             "CriuImagePath": "",
	I1213 08:47:58.226612   47783 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 08:47:58.226616   47783 command_runner.go:130] >             "IoGid": 0,
	I1213 08:47:58.226620   47783 command_runner.go:130] >             "IoUid": 0,
	I1213 08:47:58.226629   47783 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 08:47:58.226633   47783 command_runner.go:130] >             "Root": "",
	I1213 08:47:58.226641   47783 command_runner.go:130] >             "ShimCgroup": "",
	I1213 08:47:58.226649   47783 command_runner.go:130] >             "SystemdCgroup": false
	I1213 08:47:58.226652   47783 command_runner.go:130] >           },
	I1213 08:47:58.226657   47783 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 08:47:58.226666   47783 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 08:47:58.226678   47783 command_runner.go:130] >           "runtimePath": "",
	I1213 08:47:58.226683   47783 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 08:47:58.226689   47783 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 08:47:58.226698   47783 command_runner.go:130] >           "snapshotter": ""
	I1213 08:47:58.226702   47783 command_runner.go:130] >         }
	I1213 08:47:58.226705   47783 command_runner.go:130] >       }
	I1213 08:47:58.226710   47783 command_runner.go:130] >     },
	I1213 08:47:58.226721   47783 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 08:47:58.226728   47783 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 08:47:58.226735   47783 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 08:47:58.226739   47783 command_runner.go:130] >     "disableApparmor": false,
	I1213 08:47:58.226744   47783 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 08:47:58.226748   47783 command_runner.go:130] >     "disableProcMount": false,
	I1213 08:47:58.226753   47783 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 08:47:58.226759   47783 command_runner.go:130] >     "enableCDI": true,
	I1213 08:47:58.226763   47783 command_runner.go:130] >     "enableSelinux": false,
	I1213 08:47:58.226769   47783 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 08:47:58.226775   47783 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 08:47:58.226782   47783 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 08:47:58.226787   47783 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 08:47:58.226797   47783 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 08:47:58.226806   47783 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 08:47:58.226811   47783 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 08:47:58.226819   47783 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226824   47783 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 08:47:58.226830   47783 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226837   47783 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 08:47:58.226843   47783 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 08:47:58.226853   47783 command_runner.go:130] >   },
	I1213 08:47:58.226860   47783 command_runner.go:130] >   "features": {
	I1213 08:47:58.226865   47783 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 08:47:58.226868   47783 command_runner.go:130] >   },
	I1213 08:47:58.226872   47783 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 08:47:58.226884   47783 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226898   47783 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226903   47783 command_runner.go:130] >   "runtimeHandlers": [
	I1213 08:47:58.226906   47783 command_runner.go:130] >     {
	I1213 08:47:58.226910   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226915   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226921   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226925   47783 command_runner.go:130] >       }
	I1213 08:47:58.226928   47783 command_runner.go:130] >     },
	I1213 08:47:58.226934   47783 command_runner.go:130] >     {
	I1213 08:47:58.226938   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226946   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226958   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226962   47783 command_runner.go:130] >       },
	I1213 08:47:58.226965   47783 command_runner.go:130] >       "name": "runc"
	I1213 08:47:58.226968   47783 command_runner.go:130] >     }
	I1213 08:47:58.226971   47783 command_runner.go:130] >   ],
	I1213 08:47:58.226976   47783 command_runner.go:130] >   "status": {
	I1213 08:47:58.226984   47783 command_runner.go:130] >     "conditions": [
	I1213 08:47:58.226989   47783 command_runner.go:130] >       {
	I1213 08:47:58.226993   47783 command_runner.go:130] >         "message": "",
	I1213 08:47:58.226997   47783 command_runner.go:130] >         "reason": "",
	I1213 08:47:58.227001   47783 command_runner.go:130] >         "status": true,
	I1213 08:47:58.227009   47783 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 08:47:58.227015   47783 command_runner.go:130] >       },
	I1213 08:47:58.227019   47783 command_runner.go:130] >       {
	I1213 08:47:58.227033   47783 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 08:47:58.227038   47783 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 08:47:58.227046   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227054   47783 command_runner.go:130] >         "type": "NetworkReady"
	I1213 08:47:58.227057   47783 command_runner.go:130] >       },
	I1213 08:47:58.227060   47783 command_runner.go:130] >       {
	I1213 08:47:58.227083   47783 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 08:47:58.227094   47783 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 08:47:58.227100   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227106   47783 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 08:47:58.227111   47783 command_runner.go:130] >       }
	I1213 08:47:58.227115   47783 command_runner.go:130] >     ]
	I1213 08:47:58.227118   47783 command_runner.go:130] >   }
	I1213 08:47:58.227121   47783 command_runner.go:130] > }
	I1213 08:47:58.229345   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:58.229369   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:58.229387   47783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:47:58.229409   47783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:47:58.229527   47783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:47:58.229596   47783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:47:58.237061   47783 command_runner.go:130] > kubeadm
	I1213 08:47:58.237081   47783 command_runner.go:130] > kubectl
	I1213 08:47:58.237086   47783 command_runner.go:130] > kubelet
	I1213 08:47:58.237099   47783 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:47:58.237151   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:47:58.244326   47783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:47:58.256951   47783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:47:58.269808   47783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:47:58.282145   47783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:47:58.286872   47783 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:47:58.287376   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:58.410199   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:47:59.022103   47783 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:47:59.022125   47783 certs.go:195] generating shared ca certs ...
	I1213 08:47:59.022141   47783 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.022352   47783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:47:59.022424   47783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:47:59.022444   47783 certs.go:257] generating profile certs ...
	I1213 08:47:59.022584   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:47:59.022699   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:47:59.022768   47783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:47:59.022808   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:47:59.022855   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:47:59.022876   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:47:59.022904   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:47:59.022937   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:47:59.022973   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:47:59.022995   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:47:59.023008   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:47:59.023095   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:47:59.023154   47783 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:47:59.023166   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:47:59.023224   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:47:59.023288   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:47:59.023328   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:47:59.023408   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:59.023471   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.023492   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.023541   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.024142   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:47:59.045491   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:47:59.066181   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:47:59.087256   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:47:59.105383   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:47:59.122457   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:47:59.141188   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:47:59.160057   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:47:59.177518   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:47:59.194757   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:47:59.211990   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:47:59.231728   47783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:47:59.244528   47783 ssh_runner.go:195] Run: openssl version
	I1213 08:47:59.250389   47783 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:47:59.250777   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.258690   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:47:59.266115   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269715   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269750   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269798   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.310445   47783 command_runner.go:130] > 51391683
	I1213 08:47:59.310954   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:47:59.318044   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.325154   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:47:59.332532   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336318   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336361   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336416   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.376950   47783 command_runner.go:130] > 3ec20f2e
	I1213 08:47:59.377430   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:47:59.384916   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.392420   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:47:59.399763   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403540   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403584   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403630   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.443918   47783 command_runner.go:130] > b5213941
	I1213 08:47:59.444419   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:47:59.451702   47783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455380   47783 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455462   47783 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:47:59.455488   47783 command_runner.go:130] > Device: 259,1	Inode: 1311318     Links: 1
	I1213 08:47:59.455502   47783 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:59.455526   47783 command_runner.go:130] > Access: 2025-12-13 08:43:51.909308195 +0000
	I1213 08:47:59.455533   47783 command_runner.go:130] > Modify: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455538   47783 command_runner.go:130] > Change: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455544   47783 command_runner.go:130] >  Birth: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455631   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:47:59.496226   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.496712   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:47:59.538384   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.538813   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:47:59.584114   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.584598   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:47:59.624635   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.625106   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:47:59.665474   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.665947   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:47:59.706066   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.706546   47783 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:59.706648   47783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:47:59.706732   47783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:47:59.735062   47783 cri.go:89] found id: ""
	I1213 08:47:59.735134   47783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:47:59.742080   47783 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:47:59.742103   47783 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:47:59.742110   47783 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:47:59.743039   47783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:47:59.743056   47783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:47:59.743123   47783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:47:59.750746   47783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:47:59.751192   47783 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.751301   47783 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-074420" cluster setting kubeconfig missing "functional-074420" context setting]
	I1213 08:47:59.751688   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.752162   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.752336   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.752888   47783 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:47:59.752908   47783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:47:59.752914   47783 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:47:59.752919   47783 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:47:59.752923   47783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:47:59.753010   47783 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:47:59.753251   47783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:47:59.761240   47783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 08:47:59.761275   47783 kubeadm.go:602] duration metric: took 18.213538ms to restartPrimaryControlPlane
	I1213 08:47:59.761286   47783 kubeadm.go:403] duration metric: took 54.748002ms to StartCluster
	I1213 08:47:59.761334   47783 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.761412   47783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.762024   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.762236   47783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:47:59.762588   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:59.762635   47783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:47:59.762697   47783 addons.go:70] Setting storage-provisioner=true in profile "functional-074420"
	I1213 08:47:59.762710   47783 addons.go:239] Setting addon storage-provisioner=true in "functional-074420"
	I1213 08:47:59.762736   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.762848   47783 addons.go:70] Setting default-storageclass=true in profile "functional-074420"
	I1213 08:47:59.762897   47783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074420"
	I1213 08:47:59.763226   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.763230   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.768637   47783 out.go:179] * Verifying Kubernetes components...
	I1213 08:47:59.771460   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:59.801964   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.802130   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.802416   47783 addons.go:239] Setting addon default-storageclass=true in "functional-074420"
	I1213 08:47:59.802452   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.802879   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.817615   47783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:47:59.820407   47783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:47:59.820438   47783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:47:59.820510   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.832904   47783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:47:59.832927   47783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:47:59.832987   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.858620   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:59.867019   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:48:00.019931   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:48:00.079586   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:00.079699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:00.772755   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772781   47783 node_ready.go:35] waiting up to 6m0s for node "functional-074420" to be "Ready" ...
	W1213 08:48:00.772842   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772962   47783 retry.go:31] will retry after 342.791424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:00.773112   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:00.773133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773144   47783 retry.go:31] will retry after 244.896783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:00.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.019052   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.079123   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.079165   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.079186   47783 retry.go:31] will retry after 233.412949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.116509   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.177616   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.181525   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.181562   47783 retry.go:31] will retry after 544.217788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.273820   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.273908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.274281   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.313528   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.373257   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.376997   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.377026   47783 retry.go:31] will retry after 483.901383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.726523   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.774029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.774123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.774536   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.788802   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.792516   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.792575   47783 retry.go:31] will retry after 627.991267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.861830   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.921846   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.925982   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.926017   47783 retry.go:31] will retry after 1.103907842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:02.420977   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:02.487960   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:02.491818   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.491849   47783 retry.go:31] will retry after 452.917795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.773507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:02.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:02.945881   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:03.009201   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.013021   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.013052   47783 retry.go:31] will retry after 1.276929732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.030115   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:03.100586   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.104547   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.104578   47783 retry.go:31] will retry after 1.048810244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.273922   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.274012   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.274318   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:03.773006   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.773078   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.773422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.153636   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:04.212539   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.212608   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.212632   47783 retry.go:31] will retry after 1.498415757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.273795   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.273919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.274275   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.290503   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:04.351966   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.352013   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.352031   47783 retry.go:31] will retry after 2.776026758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.773561   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.773631   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.773950   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:04.774040   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:05.273769   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.273843   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.274174   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.711960   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:05.773532   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.773904   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.778452   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:05.778491   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:05.778510   47783 retry.go:31] will retry after 3.257875901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:06.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:06.773209   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.773292   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:07.129286   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:07.188224   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:07.188280   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.188301   47783 retry.go:31] will retry after 1.575099921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.273578   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.273669   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.273988   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:07.274044   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:07.773778   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.773852   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.774188   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.273837   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.273926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.274179   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.763743   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:08.773132   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.773211   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.773479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.823924   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:08.827716   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:08.827745   47783 retry.go:31] will retry after 4.082199617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.037077   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:09.107584   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:09.107627   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.107646   47783 retry.go:31] will retry after 4.733469164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.273965   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.274042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.274370   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:09.274422   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:09.773216   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.773289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.773111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.773192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.773561   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.272986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.273307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.773117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:11.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:12.273226   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.773101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.773360   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.910787   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:12.972202   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:12.972251   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:12.972270   47783 retry.go:31] will retry after 8.911795338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.273667   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.274062   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:13.773915   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.773987   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.774307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:13.774364   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:13.841699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:13.900246   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:13.900294   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.900313   47783 retry.go:31] will retry after 6.419298699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:14.273688   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.273763   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.274022   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:14.773814   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.773891   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.774197   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.273923   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.273993   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.274354   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.773092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.773436   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:16.273052   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.273127   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:16.273499   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:16.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.773488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.272982   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.273050   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.273294   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.773414   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:18.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.273210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.273554   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:18.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:18.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.273151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.273502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.773409   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.320652   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:20.382818   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:20.382863   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.382884   47783 retry.go:31] will retry after 5.774410243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.773290   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.773364   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.773699   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:20.773754   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:21.273419   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.273508   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.273838   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.773521   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.773588   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.773835   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.885194   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:21.947231   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:21.947284   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:21.947318   47783 retry.go:31] will retry after 10.220008645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[8 identical poll cycles omitted, 08:48:22.273–08:48:25.773: GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 refused each time; node_ready.go:55 repeated its will-retry warning at 08:48:22.774 and 08:48:25.273]
	I1213 08:48:26.158458   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:26.211883   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:26.215285   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:26.215316   47783 retry.go:31] will retry after 15.443420543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[12 identical poll cycles omitted, 08:48:26.273–08:48:31.773, all refused; will-retry warnings at 08:48:27.274, 08:48:29.773 and 08:48:31.774]
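
The GET loop running alongside these apply retries is minikube waiting for the node's Ready condition: one request roughly every 500 ms, with "connection refused" treated as retryable and surfaced as the node_ready.go:55 warnings. A minimal sketch of such a poll with client-go (assumed helper names; minikube's node_ready implementation differs in detail):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the apiserver every 500ms until the named node reports
// condition Ready=True, mirroring the cadence in the log above.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down every GET fails (here with
			// "connect: connection refused"); log it and keep polling.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
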
	I1213 08:48:32.167590   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:32.226722   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:32.226762   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.226781   47783 retry.go:31] will retry after 8.254164246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[17 identical poll cycles omitted, 08:48:32.273–08:48:40.273, all refused; will-retry warnings at 08:48:34.273, 08:48:36.274 and 08:48:38.773]
	I1213 08:48:40.481720   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:40.548346   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:40.548381   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.548399   47783 retry.go:31] will retry after 23.072803829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[2 identical poll cycles omitted (08:48:40.773, 08:48:41.273), both refused; will-retry warning at 08:48:40.774]
	I1213 08:48:41.658979   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:41.720805   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:41.720849   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.720869   47783 retry.go:31] will retry after 14.236359641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[29 identical poll cycles omitted, 08:48:41.774–08:48:55.773, all refused; will-retry warnings roughly every 2s (08:48:43.273, 45.273, 47.773, 49.773, 51.774, 54.273)]
	I1213 08:48:55.957869   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:56.020865   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:56.020923   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.020946   47783 retry.go:31] will retry after 43.666748427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[15 identical poll cycles omitted, 08:48:56.273–08:49:03.273, all refused; will-retry warnings at 08:48:56.773, 08:48:58.773, 08:49:00.774 and 08:49:03.273]
	I1213 08:49:03.622173   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:03.678608   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:03.682133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.682162   47783 retry.go:31] will retry after 22.66884586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.773432   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.773502   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.273439   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.273517   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.273868   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.773210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:05.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.273221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.273546   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:05.273657   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	(the GET poll on https://192.168.49.2:8441/api/v1/nodes/functional-074420 repeats every ~500ms with the same empty, connection-refused responses through 08:49:26; node_ready.go:55 logs the "will retry" warning roughly every 2s)
	I1213 08:49:26.351410   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:26.407043   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410457   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410551   47783 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
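The apply failure above is kubectl refusing to proceed because it cannot download the OpenAPI schema for client-side validation; nothing reaches the cluster while the apiserver socket is closed. The error text suggests --validate=false, but that would only skip validation — the subsequent request would still hit the refused socket. The underlying symptom is reproducible with a plain TCP dial (address and port hard-coded here purely for illustration):

	// dialcheck probes the apiserver port; the log's "connect: connection
	// refused" means this dial fails immediately rather than timing out.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}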
	(node polling resumes immediately and keeps failing with "connection refused" through 08:49:39, with node_ready.go warnings every ~2s)
	I1213 08:49:39.687941   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:49:39.742570   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746037   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746134   47783 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:49:39.749234   47783 out.go:179] * Enabled addons: 
	I1213 08:49:39.751225   47783 addons.go:530] duration metric: took 1m39.988589749s for enable addons: enabled=[]
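Per the "running callbacks" and "apply failed, will retry" lines, each addon is a callback that retries kubectl apply; once retries are exhausted with the apiserver still down, the run reports an empty addon list (enabled=[] after ~1m40s). A generic retry wrapper in the same spirit (entirely illustrative; not the real addons.go implementation):

	// Package addons sketches the retry-apply behaviour seen in the log.
	package addons

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryApply re-runs kubectl apply until it succeeds or the attempts
	// are exhausted, echoing the "apply failed, will retry" lines above.
	func retryApply(manifest string, attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			// kubectl validates against /openapi/v2 first, so a down
			// apiserver fails the apply before any object is created.
			out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("apply %s: %w: %s", manifest, e, out)
			time.Sleep(delay)
		}
		return err
	}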
	(polling continues unchanged, every request still refused, through 08:50:01)
	I1213 08:50:01.773963   47783 type.go:168] "Request Body" body=""
	I1213 08:50:01.774042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:01.774389   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:02.273108   47783 type.go:168] "Request Body" body=""
	I1213 08:50:02.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:02.273568   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:02.773279   47783 type.go:168] "Request Body" body=""
	I1213 08:50:02.773363   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:02.773686   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:03.273398   47783 type.go:168] "Request Body" body=""
	I1213 08:50:03.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:03.273787   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:03.773099   47783 type.go:168] "Request Body" body=""
	I1213 08:50:03.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:03.773516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:03.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:04.273225   47783 type.go:168] "Request Body" body=""
	I1213 08:50:04.273313   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:04.273579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:04.773096   47783 type.go:168] "Request Body" body=""
	I1213 08:50:04.773170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:04.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:05.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:50:05.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:05.273466   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:05.773045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:05.773131   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:05.773429   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:06.273084   47783 type.go:168] "Request Body" body=""
	I1213 08:50:06.273161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:06.273495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:06.273553   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:06.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:50:06.773112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:06.773416   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:07.272971   47783 type.go:168] "Request Body" body=""
	I1213 08:50:07.273045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:07.273298   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:07.773048   47783 type.go:168] "Request Body" body=""
	I1213 08:50:07.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:07.773427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:50:08.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:08.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:08.773055   47783 type.go:168] "Request Body" body=""
	I1213 08:50:08.773142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:08.773458   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:08.773523   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:09.273174   47783 type.go:168] "Request Body" body=""
	I1213 08:50:09.273248   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:09.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:09.773553   47783 type.go:168] "Request Body" body=""
	I1213 08:50:09.773632   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:09.773992   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:10.273723   47783 type.go:168] "Request Body" body=""
	I1213 08:50:10.273814   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:10.274202   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:10.773083   47783 type.go:168] "Request Body" body=""
	I1213 08:50:10.773168   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:10.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:10.773551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:11.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:50:11.273271   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:11.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:11.772991   47783 type.go:168] "Request Body" body=""
	I1213 08:50:11.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:11.773355   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:12.273063   47783 type.go:168] "Request Body" body=""
	I1213 08:50:12.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:12.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:12.773181   47783 type.go:168] "Request Body" body=""
	I1213 08:50:12.773261   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:12.773608   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:12.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:13.273258   47783 type.go:168] "Request Body" body=""
	I1213 08:50:13.273334   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:13.273612   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:13.773147   47783 type.go:168] "Request Body" body=""
	I1213 08:50:13.773220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:13.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:14.273102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:14.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:14.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:14.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:50:14.773121   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:14.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:15.273165   47783 type.go:168] "Request Body" body=""
	I1213 08:50:15.273259   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:15.273603   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:15.273662   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:15.773104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:15.773182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:15.773492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:16.273005   47783 type.go:168] "Request Body" body=""
	I1213 08:50:16.273080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:16.273324   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:16.773018   47783 type.go:168] "Request Body" body=""
	I1213 08:50:16.773092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:16.773427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:17.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:17.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:17.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:17.773161   47783 type.go:168] "Request Body" body=""
	I1213 08:50:17.773232   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:17.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:17.773616   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:18.273100   47783 type.go:168] "Request Body" body=""
	I1213 08:50:18.273173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:18.273458   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:18.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:50:18.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:18.773525   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:19.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:50:19.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:19.273451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:19.773113   47783 type.go:168] "Request Body" body=""
	I1213 08:50:19.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:19.773533   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:20.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:50:20.273111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:20.273432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:20.273484   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:20.773059   47783 type.go:168] "Request Body" body=""
	I1213 08:50:20.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:20.773472   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:21.273123   47783 type.go:168] "Request Body" body=""
	I1213 08:50:21.273195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:21.273492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:21.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:50:21.773182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:21.773514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:22.273055   47783 type.go:168] "Request Body" body=""
	I1213 08:50:22.273123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:22.273427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:22.773171   47783 type.go:168] "Request Body" body=""
	I1213 08:50:22.773255   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:22.773587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:22.773638   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:23.273116   47783 type.go:168] "Request Body" body=""
	I1213 08:50:23.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:23.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:23.773176   47783 type.go:168] "Request Body" body=""
	I1213 08:50:23.773242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:23.773565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:24.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:50:24.273208   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:24.273526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:24.773266   47783 type.go:168] "Request Body" body=""
	I1213 08:50:24.773335   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:24.773645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:24.773691   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:25.273185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.273256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:25.773486   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.773580   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.773863   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.273638   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.273722   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.274046   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.773772   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.773849   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.774110   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:26.774158   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:27.273949   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.274035   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:27.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.773157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.773492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.273060   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.273130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.773138   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.773213   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.773534   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:29.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.273332   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.273667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:29.273721   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:29.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.773492   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.773757   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.273528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.773268   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.773342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.773681   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.273131   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:31.773554   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:32.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.273254   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.273550   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:32.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.773114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.773420   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.773311   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.773638   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:33.773693   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:34.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:34.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.773483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.273285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.273659   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.773467   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.773539   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:35.773847   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:36.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.273501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:36.773198   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.773272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.773631   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.273322   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.273649   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.773139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:38.273158   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.273235   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.273563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:38.273617   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:38.773952   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.774033   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.774277   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.272962   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.273042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.273376   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.773154   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.773228   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.273301   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.273580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.773572   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.773643   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.773972   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:40.774022   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:41.273745   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.273822   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.274145   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:41.773824   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.774153   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.273992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.274071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.274419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.773457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:43.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:43.273477   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:43.773102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.773521   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.273220   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.273315   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.273660   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.773097   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.773359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:45.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.273133   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:45.273530   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:45.773107   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.773544   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.273227   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.273297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.273559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:47.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.273574   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:47.273628   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:47.773031   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.273067   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.773210   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.272997   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.273065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.273322   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.773187   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.773256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.773595   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:49.773649   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:50.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.273391   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.273716   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:50.773448   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.773521   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.773785   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.773185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.773266   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.773606   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:52.273034   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.273399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:52.273451   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 request/response pair repeats every ~500 ms from 08:50:52 through 08:51:52, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning after roughly every fifth consecutive failure]
	I1213 08:51:52.773020   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:52.773493   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:53.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.273277   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.273626   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:53.773306   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.773373   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.273587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.773074   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:54.773556   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:55.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.273270   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.273520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:55.773075   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.773149   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.773705   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.273383   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.772963   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.773031   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.773288   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:57.272979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:57.273481   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:57.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.773193   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.773526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.273108   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.773188   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.773503   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:59.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.273220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:59.273585   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:59.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.273530   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.273627   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.274021   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.773522   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.273071   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.773057   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:01.773527   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:02.273229   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.273310   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.273637   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:02.773212   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.773280   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.273115   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.273565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.773352   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.773695   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:03.773772   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:04.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.273090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.273426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:04.773087   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.273162   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.273242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.273562   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.773515   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.773585   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.773843   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:05.773884   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:06.273680   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.273755   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.274063   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:06.773854   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.773929   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.774259   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.274012   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.274080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.274341   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.773425   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:08.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.273531   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:08.273588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:08.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.773151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.273541   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.773271   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.773343   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.773683   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:10.273235   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.273308   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.273623   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:10.273686   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:10.773580   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.773661   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.773990   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.273663   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.274065   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.773833   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:12.273942   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.274016   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.274348   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:12.274403   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:12.774024   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.774098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.774431   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.273324   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.273404   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.273738   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.773491   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.773809   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:14.773870   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:15.273604   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.273678   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.273970   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:15.773929   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.774005   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.273966   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.274328   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.773036   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.773432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:17.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:17.273639   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:17.772950   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.773025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.773278   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.274027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.274101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.773356   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:19.773818   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:20.273485   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.273567   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.273890   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:20.773865   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.773932   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.774231   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.273999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.274069   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.274395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.772998   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.773076   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.773386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:22.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.273101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.273413   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:22.273462   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:22.773146   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.773221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.273239   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.273317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.273630   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.773089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.773346   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:24.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:24.273551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:24.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.273040   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.773304   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:26.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.273187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.273523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:26.273583   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:26.773246   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.773317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.773577   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.273252   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.273645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.773350   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.773425   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.773742   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.273499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.773080   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:28.773541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:29.273204   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.273624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:29.773318   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.773387   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.773650   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.273487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.773484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:31.273046   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.273373   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:31.273414   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:31.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.773520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.773087   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.773337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:33.273004   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.273082   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:33.273472   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:33.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.773405   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.273043   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.273120   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.773227   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.773558   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:35.273270   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.273350   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.273680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:35.273739   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:35.773621   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.773688   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.773944   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.273767   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.273909   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.274226   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.773955   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.774030   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.774359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.273017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.273085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.273361   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:37.773513   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:38.273181   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.273262   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.273612   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:38.773291   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.773360   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.773368   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.773449   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.773774   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:39.773830   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:40.273560   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.273630   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.273887   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:40.773805   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.773877   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.774208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.273838   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.273922   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.274250   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.774079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.774332   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:41.774381   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:42.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.273177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:42.773228   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.773303   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.273031   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.273352   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.773424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:44.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:44.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:44.273504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:44.273560   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical poll repeats every ~500 ms from 08:52:44.773 through 08:53:41.773: each iteration issues the same GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 with the same Accept and User-Agent headers, receives no response (status="" headers="" milliseconds=0), and roughly every fifth attempt (~2.5 s) node_ready.go:55 logs the same warning that the "Ready" condition check failed with dial tcp 192.168.49.2:8441: connect: connection refused and will retry ...]
	I1213 08:53:42.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.273143   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:42.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.773518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.273132   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:43.773551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:44.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.273652   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:44.773050   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.273208   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.273289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.273720   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.773581   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.773651   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.773963   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:45.774017   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:46.275629   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.275703   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.275961   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:46.773754   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.774161   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.273968   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.274351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.773096   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.773357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:48.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.273530   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:48.273587   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:48.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.773183   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.773523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.273149   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.773819   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:50.273603   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.273680   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.273953   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:50.273999   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:50.773802   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.773880   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.774158   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.273956   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.274317   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.772988   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.773063   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.773397   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.773515   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:52.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:53.273241   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.273661   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:53.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.773123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.273507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.773088   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:55.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.273215   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.273569   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:55.273618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:55.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.773491   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.773063   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.773385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.773297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.773605   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:57.773654   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:58.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.273402   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.273675   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:58.773360   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.773734   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.273441   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.273519   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.273831   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.773701   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:59.773764   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:54:00.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.273367   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:54:00.273821   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:54:00.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.773932   47783 node_ready.go:38] duration metric: took 6m0.00107019s for node "functional-074420" to be "Ready" ...
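
The six-minute loop above is minikube's node_ready wait: GET the node object every ~500ms, swallow transient errors, and give up when the 6m0s deadline (StartHostTimeout in the cluster config further down) expires. Below is a minimal client-go sketch of the same pattern — not minikube's actual implementation; the kubeconfig path is a placeholder, and the node name mirrors this report:

```go
// node_ready_sketch.go — poll a node's Ready condition the way the log above does:
// GET every 500ms until Ready=True or a 6m deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-074420", metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. "connection refused") are swallowed so the poll retries.
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	// A nil err means Ready; context.DeadlineExceeded reproduces
	// "WaitNodeCondition: context deadline exceeded" from the log above.
	fmt.Println("wait result:", err)
}
```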
	I1213 08:54:00.777004   47783 out.go:203] 
	W1213 08:54:00.779921   47783 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 08:54:00.779957   47783 out.go:285] * 
	W1213 08:54:00.782360   47783 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:54:00.785205   47783 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-074420 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.053200106s for "functional-074420" cluster.
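
"exit status 80" is how the harness sees the GUEST_START failure: it runs the binary and inspects the process exit code. A small sketch of that step in Go, using the same command line as the failed run above:

```go
// exit_code_sketch.go — run a command and recover its exit status,
// as the test harness does when it reports "exit status 80".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-074420", "--alsologtostderr", "-v=8")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 80 for the GUEST_START failure above
	} else if err != nil {
		fmt.Println("failed to run:", err) // binary missing or not started; no exit code to report
	}
}
```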
I1213 08:54:01.380133    4120 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
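
Individual fields of this dump are queryable with `docker inspect -f` Go templates instead of scanning the full JSON; the start log further down uses exactly this template for 22/tcp. A sketch resolving the apiserver mapping for the container in this report:

```go
// host_port_sketch.go — resolve the host port Docker mapped to the apiserver
// port 8441/tcp, using the same template the minikube log below applies to 22/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-074420").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("8441/tcp ->", strings.TrimSpace(string(out))) // "32791" per the inspect output above
}
```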
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (393.958594ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
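
Since the container is Running while every in-cluster GET was refused, a quick cross-check is to probe the forwarded apiserver port (127.0.0.1:32791 per the inspect output) from the host. This is a diagnostic sketch, not part of the test suite; the apiserver's self-signed certificate forces InsecureSkipVerify for a bare probe:

```go
// apiserver_probe_sketch.go — hit the forwarded apiserver port from the host.
// A connection-refused here matches the in-cluster failures above; a 401/403
// JSON reply would mean the apiserver is actually up behind the port map.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert for 192.168.49.2; skip verification for a probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:32791/healthz") // host port mapped to 8441/tcp above
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```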
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-074420 logs -n 25: (1.143215107s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh -- ls -la /mount-9p                                                                                                               │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh sudo umount -f /mount-9p                                                                                                          │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format short --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh pgrep buildkitd                                                                                                                   │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount2                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount3                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount          │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image          │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete         │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start          │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start          │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:47:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
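
That [IWEF]mmdd header makes these logs easy to slice mechanically, e.g. pulling only the W/E lines out of a six-minute retry storm. A small parsing sketch; the regexp is an assumption fitted to the lines in this report rather than klog's formal grammar:

```go
// klog_parse_sketch.go — split klog-style lines
// ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg") into fields.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `W1213 08:53:37.273421   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry)`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```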
	I1213 08:47:55.372522   47783 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:47:55.372733   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.372759   47783 out.go:374] Setting ErrFile to fd 2...
	I1213 08:47:55.372779   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.373071   47783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:47:55.373500   47783 out.go:368] Setting JSON to false
	I1213 08:47:55.374339   47783 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1828,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:47:55.374435   47783 start.go:143] virtualization:  
	I1213 08:47:55.378014   47783 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:47:55.381059   47783 notify.go:221] Checking for updates...
	I1213 08:47:55.381456   47783 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:47:55.384645   47783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:47:55.387475   47783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:55.390285   47783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:47:55.393179   47783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:47:55.396170   47783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:47:55.399625   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:55.399723   47783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:47:55.421152   47783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:47:55.421278   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.479286   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.469949512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.479410   47783 docker.go:319] overlay module found
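
`docker system info --format "{{json .}}"` emits a single JSON document, so the fields minikube validates here (CPU count, memory, cgroup driver) can be decoded directly rather than scraped from the dump. A sketch decoding a few of them into an ad-hoc struct; the field names are taken from the dump above, not a documented schema:

```go
// docker_info_sketch.go — decode the JSON emitted by
// `docker system info --format "{{json .}}"` into just the fields of interest.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Matches the dump above: NCPU:2, MemTotal:8214835200, ServerVersion:28.1.1, CgroupDriver:cgroupfs.
	fmt.Printf("%d CPUs, %d bytes RAM, docker %s, cgroup driver %s\n",
		info.NCPU, info.MemTotal, info.ServerVersion, info.CgroupDriver)
}
```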
	I1213 08:47:55.482469   47783 out.go:179] * Using the docker driver based on existing profile
	I1213 08:47:55.485237   47783 start.go:309] selected driver: docker
	I1213 08:47:55.485259   47783 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.485359   47783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:47:55.485469   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.552137   47783 info.go:266] docker info: [... identical to the 08:47:55.479 dump above except for SystemTime ...]
	I1213 08:47:55.552549   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:55.552614   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:55.552664   47783 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.555904   47783 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:47:55.558801   47783 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:47:55.561846   47783 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:47:55.564866   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:55.564922   47783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:47:55.564938   47783 cache.go:65] Caching tarball of preloaded images
	I1213 08:47:55.564963   47783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:47:55.565027   47783 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:47:55.565039   47783 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:47:55.565188   47783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:47:55.585020   47783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:47:55.585044   47783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:47:55.585064   47783 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:47:55.585094   47783 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:47:55.585169   47783 start.go:364] duration metric: took 45.161µs to acquireMachinesLock for "functional-074420"
	I1213 08:47:55.585195   47783 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:47:55.585204   47783 fix.go:54] fixHost starting: 
	I1213 08:47:55.585456   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:55.601925   47783 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:47:55.601956   47783 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:47:55.605110   47783 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:47:55.605143   47783 machine.go:94] provisionDockerMachine start ...
	I1213 08:47:55.605228   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.622184   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.622521   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.622536   47783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:47:55.770899   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.770923   47783 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:47:55.770990   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.788917   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.789224   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.789243   47783 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:47:55.944141   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.944216   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.963276   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.963669   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.963693   47783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:47:56.123813   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
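The shell snippet above is minikube's guard for /etc/hosts: leave the file alone if some line already ends with the hostname, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new mapping. A minimal Go sketch of the same logic, for reference only (ensureHostname is a hypothetical helper, not the minikube implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the logged shell snippet: no-op if some line
	// already ends with the hostname, rewrite an existing 127.0.1.1 entry,
	// otherwise append a new one. Path and permissions are assumptions.
	func ensureHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
			return nil // hostname already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "functional-074420"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}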
	I1213 08:47:56.123839   47783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:47:56.123865   47783 ubuntu.go:190] setting up certificates
	I1213 08:47:56.123875   47783 provision.go:84] configureAuth start
	I1213 08:47:56.123935   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.141934   47783 provision.go:143] copyHostCerts
	I1213 08:47:56.141983   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142030   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:47:56.142044   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142121   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:47:56.142216   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142238   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:47:56.142247   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142276   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:47:56.142329   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142361   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:47:56.142370   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142397   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:47:56.142457   47783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
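configureAuth regenerates a server certificate whose SANs cover the loopback address, the container IP, and the machine's names, exactly the san=[...] list logged above. A self-contained sketch of issuing a certificate with those SANs via Go's crypto/x509 (self-signed here to stay short; the real flow signs against ca.pem/ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-074420"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: IPs plus DNS names.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"functional-074420", "localhost", "minikube"},
		}
		// Self-signed for brevity; a CA-signed cert would pass the CA cert/key as parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}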
	I1213 08:47:56.320875   47783 provision.go:177] copyRemoteCerts
	I1213 08:47:56.320949   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:47:56.320994   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.338054   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.442993   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 08:47:56.443052   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:47:56.459467   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 08:47:56.459650   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:47:56.476836   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 08:47:56.476894   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:47:56.494408   47783 provision.go:87] duration metric: took 370.509157ms to configureAuth
	I1213 08:47:56.494435   47783 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:47:56.494611   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:56.494624   47783 machine.go:97] duration metric: took 889.474725ms to provisionDockerMachine
	I1213 08:47:56.494633   47783 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:47:56.494644   47783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:47:56.494700   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:47:56.494748   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.511710   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.615158   47783 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:47:56.618357   47783 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:47:56.618378   47783 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:47:56.618383   47783 command_runner.go:130] > VERSION_ID="12"
	I1213 08:47:56.618388   47783 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:47:56.618392   47783 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:47:56.618422   47783 command_runner.go:130] > ID=debian
	I1213 08:47:56.618436   47783 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:47:56.618441   47783 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:47:56.618448   47783 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:47:56.618517   47783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:47:56.618537   47783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:47:56.618550   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:47:56.618607   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:47:56.618691   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:47:56.618702   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1213 08:47:56.618783   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:47:56.618792   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1213 08:47:56.618842   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:47:56.626162   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:56.643608   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:47:56.661460   47783 start.go:296] duration metric: took 166.811201ms for postStartSetup
	I1213 08:47:56.661553   47783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:47:56.661603   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.678627   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.785005   47783 command_runner.go:130] > 14%
	I1213 08:47:56.785418   47783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:47:56.789762   47783 command_runner.go:130] > 169G
	I1213 08:47:56.790146   47783 fix.go:56] duration metric: took 1.204938515s for fixHost
	I1213 08:47:56.790168   47783 start.go:83] releasing machines lock for "functional-074420", held for 1.204983079s
	I1213 08:47:56.790231   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.811813   47783 ssh_runner.go:195] Run: cat /version.json
	I1213 08:47:56.811877   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.812180   47783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:47:56.812227   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.839131   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.843453   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:57.035647   47783 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 08:47:57.038511   47783 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
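The /version.json payload above is small enough to decode into a struct directly; a hedged sketch with field names taken from the logged output (the struct type itself is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// kicVersion matches the fields seen in the logged /version.json.
	type kicVersion struct {
		ISOVersion      string `json:"iso_version"`
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
	}

	func main() {
		raw := `{"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}`
		var v kicVersion
		if err := json.Unmarshal([]byte(raw), &v); err != nil {
			panic(err)
		}
		fmt.Println(v.KicbaseVersion, v.Commit)
	}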
	I1213 08:47:57.038690   47783 ssh_runner.go:195] Run: systemctl --version
	I1213 08:47:57.044708   47783 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:47:57.044761   47783 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:47:57.045134   47783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:47:57.049401   47783 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:47:57.049443   47783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:47:57.049503   47783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:47:57.057127   47783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:47:57.057158   47783 start.go:496] detecting cgroup driver to use...
	I1213 08:47:57.057211   47783 detect.go:187] detected "cgroupfs" cgroup driver on host os
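detect.go reports "cgroupfs" for this host. One common probe for the distinction, shown here as an assumption rather than minikube's exact heuristic: a host on the cgroup v2 unified hierarchy booted by systemd usually wants the "systemd" driver, and anything else defaults to "cgroupfs":

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// guessCgroupDriver is a hypothetical heuristic for illustration only.
	func guessCgroupDriver() string {
		_, statErr := os.Stat("/sys/fs/cgroup/cgroup.controllers") // present on cgroup v2
		comm, readErr := os.ReadFile("/proc/1/comm")
		systemdInit := readErr == nil && strings.TrimSpace(string(comm)) == "systemd"
		if statErr == nil && systemdInit {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() {
		fmt.Println("detected cgroup driver:", guessCgroupDriver())
	}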
	I1213 08:47:57.057279   47783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:47:57.072743   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:47:57.086014   47783 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:47:57.086118   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:47:57.102029   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:47:57.115088   47783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:47:57.226726   47783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:47:57.347870   47783 docker.go:234] disabling docker service ...
	I1213 08:47:57.347940   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:47:57.363202   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:47:57.377010   47783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:47:57.506500   47783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:47:57.649131   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:47:57.662497   47783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:47:57.677018   47783 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:47:57.678207   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:47:57.688555   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:47:57.698272   47783 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:47:57.698370   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:47:57.707500   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.716692   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:47:57.725739   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.734886   47783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:47:57.743485   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:47:57.753073   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:47:57.761993   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:47:57.770719   47783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:47:57.777695   47783 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:47:57.778683   47783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:47:57.786237   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:57.908393   47783 ssh_runner.go:195] Run: sudo systemctl restart containerd
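The sed pipeline above pins the sandbox image and forces SystemdCgroup = false in /etc/containerd/config.toml before the daemon-reload and restart. The same line-oriented rewrite expressed in Go, mirroring two of the logged sed expressions (a hypothetical helper with the same intended effect):

	package main

	import (
		"os"
		"regexp"
	)

	// Mirrors two of the logged sed edits against config.toml:
	//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|
	//   s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|
	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
			ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		out = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
			ReplaceAll(out, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
		// containerd must be restarted afterwards (systemctl restart containerd).
	}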
	I1213 08:47:58.046253   47783 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:47:58.046368   47783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:47:58.050493   47783 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 08:47:58.050558   47783 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:47:58.050578   47783 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1213 08:47:58.050603   47783 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:58.050636   47783 command_runner.go:130] > Access: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050663   47783 command_runner.go:130] > Modify: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050685   47783 command_runner.go:130] > Change: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050720   47783 command_runner.go:130] >  Birth: -
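start.go:543 allows 60s for the containerd socket to come back after the restart, and the stat output above shows it reappeared almost immediately. A minimal polling sketch of that wait (the poll interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the path exists and is a socket, or times out.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}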
	I1213 08:47:58.050927   47783 start.go:564] Will wait 60s for crictl version
	I1213 08:47:58.051002   47783 ssh_runner.go:195] Run: which crictl
	I1213 08:47:58.054661   47783 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:47:58.054852   47783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:47:58.077876   47783 command_runner.go:130] > Version:  0.1.0
	I1213 08:47:58.077939   47783 command_runner.go:130] > RuntimeName:  containerd
	I1213 08:47:58.077961   47783 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 08:47:58.077985   47783 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:47:58.080051   47783 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:47:58.080159   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.100302   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.101953   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.119235   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.126521   47783 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:47:58.129463   47783 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:47:58.145273   47783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:47:58.149369   47783 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 08:47:58.149453   47783 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:47:58.149580   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:58.149657   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.174191   47783 command_runner.go:130] > {
	I1213 08:47:58.174214   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.174219   47783 command_runner.go:130] >     {
	I1213 08:47:58.174232   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.174237   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174242   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.174246   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174250   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174259   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.174263   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174267   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.174271   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174275   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174278   47783 command_runner.go:130] >     },
	I1213 08:47:58.174281   47783 command_runner.go:130] >     {
	I1213 08:47:58.174289   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.174299   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174305   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.174308   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174313   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174321   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.174328   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174332   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.174336   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174340   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174343   47783 command_runner.go:130] >     },
	I1213 08:47:58.174349   47783 command_runner.go:130] >     {
	I1213 08:47:58.174356   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.174361   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174366   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.174371   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174384   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174395   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.174399   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174403   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.174409   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.174417   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174421   47783 command_runner.go:130] >     },
	I1213 08:47:58.174424   47783 command_runner.go:130] >     {
	I1213 08:47:58.174430   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.174436   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174441   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.174444   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174449   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174458   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.174464   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174468   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.174472   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174475   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174478   47783 command_runner.go:130] >       },
	I1213 08:47:58.174487   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174491   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174494   47783 command_runner.go:130] >     },
	I1213 08:47:58.174497   47783 command_runner.go:130] >     {
	I1213 08:47:58.174507   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.174511   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174518   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.174522   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174526   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174533   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.174539   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174545   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.174551   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174559   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174562   47783 command_runner.go:130] >       },
	I1213 08:47:58.174566   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174576   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174580   47783 command_runner.go:130] >     },
	I1213 08:47:58.174584   47783 command_runner.go:130] >     {
	I1213 08:47:58.174594   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.174601   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174607   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.174610   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174614   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174625   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.174631   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174635   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.174638   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174642   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174645   47783 command_runner.go:130] >       },
	I1213 08:47:58.174649   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174655   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174659   47783 command_runner.go:130] >     },
	I1213 08:47:58.174663   47783 command_runner.go:130] >     {
	I1213 08:47:58.174671   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.174677   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174681   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.174684   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174688   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174699   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.174704   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174709   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.174713   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174716   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174721   47783 command_runner.go:130] >     },
	I1213 08:47:58.174725   47783 command_runner.go:130] >     {
	I1213 08:47:58.174732   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.174742   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174747   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.174753   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174757   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174765   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.174774   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174781   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.174784   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174788   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174797   47783 command_runner.go:130] >       },
	I1213 08:47:58.174802   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174808   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174811   47783 command_runner.go:130] >     },
	I1213 08:47:58.174814   47783 command_runner.go:130] >     {
	I1213 08:47:58.174821   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.174828   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174833   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.174836   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174840   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174848   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.174851   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174855   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.174860   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174864   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.174875   47783 command_runner.go:130] >       },
	I1213 08:47:58.174880   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174884   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.174887   47783 command_runner.go:130] >     }
	I1213 08:47:58.174890   47783 command_runner.go:130] >   ]
	I1213 08:47:58.174893   47783 command_runner.go:130] > }
	I1213 08:47:58.175043   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.175056   47783 containerd.go:534] Images already preloaded, skipping extraction
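The preload check boils down to parsing the crictl images --output json dump above and confirming that the expected repo tags are all present. A sketch of that comparison (struct fields follow the logged JSON; the required-tag list is an illustrative subset):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImages matches the shape of the logged crictl JSON.
	type criImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var imgs criImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative subset of the tags seen in the dump above.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/pause:3.10.1",
		} {
			fmt.Printf("%-50s present=%v\n", want, have[want])
		}
	}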
	I1213 08:47:58.175117   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.196592   47783 command_runner.go:130] > {
	I1213 08:47:58.196612   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.196616   47783 command_runner.go:130] >     {
	I1213 08:47:58.196626   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.196631   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196637   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.196641   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196644   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196654   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.196660   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196664   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.196674   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196678   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196682   47783 command_runner.go:130] >     },
	I1213 08:47:58.196685   47783 command_runner.go:130] >     {
	I1213 08:47:58.196701   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.196710   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196715   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.196719   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196723   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196732   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.196739   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196745   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.196753   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196757   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196764   47783 command_runner.go:130] >     },
	I1213 08:47:58.196768   47783 command_runner.go:130] >     {
	I1213 08:47:58.196782   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.196787   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196793   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.196798   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196807   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196825   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.196833   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196838   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.196847   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.196852   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196861   47783 command_runner.go:130] >     },
	I1213 08:47:58.196864   47783 command_runner.go:130] >     {
	I1213 08:47:58.196871   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.196875   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196880   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.196884   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196888   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196897   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.196904   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196908   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.196912   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.196916   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.196924   47783 command_runner.go:130] >       },
	I1213 08:47:58.196929   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196936   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196940   47783 command_runner.go:130] >     },
	I1213 08:47:58.196943   47783 command_runner.go:130] >     {
	I1213 08:47:58.196953   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.196958   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196963   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.196968   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196973   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196984   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.196993   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196998   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.197005   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197015   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197022   47783 command_runner.go:130] >       },
	I1213 08:47:58.197030   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197034   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197037   47783 command_runner.go:130] >     },
	I1213 08:47:58.197040   47783 command_runner.go:130] >     {
	I1213 08:47:58.197048   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.197056   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197063   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.197069   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197074   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197086   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.197094   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197098   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.197105   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197109   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197113   47783 command_runner.go:130] >       },
	I1213 08:47:58.197117   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197121   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197126   47783 command_runner.go:130] >     },
	I1213 08:47:58.197129   47783 command_runner.go:130] >     {
	I1213 08:47:58.197140   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.197144   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197154   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.197158   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197162   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197173   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.197180   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197185   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.197189   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197194   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197198   47783 command_runner.go:130] >     },
	I1213 08:47:58.197201   47783 command_runner.go:130] >     {
	I1213 08:47:58.197209   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.197216   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197225   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.197232   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197237   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197248   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.197255   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197259   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.197266   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197270   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197273   47783 command_runner.go:130] >       },
	I1213 08:47:58.197279   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197283   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197286   47783 command_runner.go:130] >     },
	I1213 08:47:58.197289   47783 command_runner.go:130] >     {
	I1213 08:47:58.197296   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.197304   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197309   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.197313   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197320   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197329   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.197335   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197339   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.197346   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197351   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.197356   47783 command_runner.go:130] >       },
	I1213 08:47:58.197362   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197366   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.197369   47783 command_runner.go:130] >     }
	I1213 08:47:58.197372   47783 command_runner.go:130] >   ]
	I1213 08:47:58.197375   47783 command_runner.go:130] > }
	I1213 08:47:58.199421   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.199439   47783 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:47:58.199455   47783 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:47:58.199601   47783 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:47:58.199669   47783 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:47:58.226280   47783 command_runner.go:130] > {
	I1213 08:47:58.226303   47783 command_runner.go:130] >   "cniconfig": {
	I1213 08:47:58.226310   47783 command_runner.go:130] >     "Networks": [
	I1213 08:47:58.226314   47783 command_runner.go:130] >       {
	I1213 08:47:58.226319   47783 command_runner.go:130] >         "Config": {
	I1213 08:47:58.226324   47783 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 08:47:58.226329   47783 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 08:47:58.226333   47783 command_runner.go:130] >           "Plugins": [
	I1213 08:47:58.226336   47783 command_runner.go:130] >             {
	I1213 08:47:58.226340   47783 command_runner.go:130] >               "Network": {
	I1213 08:47:58.226344   47783 command_runner.go:130] >                 "ipam": {},
	I1213 08:47:58.226350   47783 command_runner.go:130] >                 "type": "loopback"
	I1213 08:47:58.226358   47783 command_runner.go:130] >               },
	I1213 08:47:58.226364   47783 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 08:47:58.226371   47783 command_runner.go:130] >             }
	I1213 08:47:58.226374   47783 command_runner.go:130] >           ],
	I1213 08:47:58.226384   47783 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 08:47:58.226388   47783 command_runner.go:130] >         },
	I1213 08:47:58.226398   47783 command_runner.go:130] >         "IFName": "lo"
	I1213 08:47:58.226402   47783 command_runner.go:130] >       }
	I1213 08:47:58.226405   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226410   47783 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 08:47:58.226415   47783 command_runner.go:130] >     "PluginDirs": [
	I1213 08:47:58.226419   47783 command_runner.go:130] >       "/opt/cni/bin"
	I1213 08:47:58.226425   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226430   47783 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 08:47:58.226442   47783 command_runner.go:130] >     "Prefix": "eth"
	I1213 08:47:58.226445   47783 command_runner.go:130] >   },
	I1213 08:47:58.226448   47783 command_runner.go:130] >   "config": {
	I1213 08:47:58.226454   47783 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 08:47:58.226459   47783 command_runner.go:130] >       "/etc/cdi",
	I1213 08:47:58.226466   47783 command_runner.go:130] >       "/var/run/cdi"
	I1213 08:47:58.226472   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226480   47783 command_runner.go:130] >     "cni": {
	I1213 08:47:58.226484   47783 command_runner.go:130] >       "binDir": "",
	I1213 08:47:58.226487   47783 command_runner.go:130] >       "binDirs": [
	I1213 08:47:58.226491   47783 command_runner.go:130] >         "/opt/cni/bin"
	I1213 08:47:58.226495   47783 command_runner.go:130] >       ],
	I1213 08:47:58.226499   47783 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 08:47:58.226503   47783 command_runner.go:130] >       "confTemplate": "",
	I1213 08:47:58.226507   47783 command_runner.go:130] >       "ipPref": "",
	I1213 08:47:58.226510   47783 command_runner.go:130] >       "maxConfNum": 1,
	I1213 08:47:58.226514   47783 command_runner.go:130] >       "setupSerially": false,
	I1213 08:47:58.226519   47783 command_runner.go:130] >       "useInternalLoopback": false
	I1213 08:47:58.226524   47783 command_runner.go:130] >     },
	I1213 08:47:58.226530   47783 command_runner.go:130] >     "containerd": {
	I1213 08:47:58.226538   47783 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 08:47:58.226543   47783 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 08:47:58.226548   47783 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 08:47:58.226552   47783 command_runner.go:130] >       "runtimes": {
	I1213 08:47:58.226557   47783 command_runner.go:130] >         "runc": {
	I1213 08:47:58.226562   47783 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 08:47:58.226566   47783 command_runner.go:130] >           "PodAnnotations": null,
	I1213 08:47:58.226570   47783 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 08:47:58.226575   47783 command_runner.go:130] >           "cgroupWritable": false,
	I1213 08:47:58.226580   47783 command_runner.go:130] >           "cniConfDir": "",
	I1213 08:47:58.226586   47783 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 08:47:58.226591   47783 command_runner.go:130] >           "io_type": "",
	I1213 08:47:58.226596   47783 command_runner.go:130] >           "options": {
	I1213 08:47:58.226601   47783 command_runner.go:130] >             "BinaryName": "",
	I1213 08:47:58.226607   47783 command_runner.go:130] >             "CriuImagePath": "",
	I1213 08:47:58.226612   47783 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 08:47:58.226616   47783 command_runner.go:130] >             "IoGid": 0,
	I1213 08:47:58.226620   47783 command_runner.go:130] >             "IoUid": 0,
	I1213 08:47:58.226629   47783 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 08:47:58.226633   47783 command_runner.go:130] >             "Root": "",
	I1213 08:47:58.226641   47783 command_runner.go:130] >             "ShimCgroup": "",
	I1213 08:47:58.226649   47783 command_runner.go:130] >             "SystemdCgroup": false
	I1213 08:47:58.226652   47783 command_runner.go:130] >           },
	I1213 08:47:58.226657   47783 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 08:47:58.226666   47783 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 08:47:58.226678   47783 command_runner.go:130] >           "runtimePath": "",
	I1213 08:47:58.226683   47783 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 08:47:58.226689   47783 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 08:47:58.226698   47783 command_runner.go:130] >           "snapshotter": ""
	I1213 08:47:58.226702   47783 command_runner.go:130] >         }
	I1213 08:47:58.226705   47783 command_runner.go:130] >       }
	I1213 08:47:58.226710   47783 command_runner.go:130] >     },
	I1213 08:47:58.226721   47783 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 08:47:58.226728   47783 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 08:47:58.226735   47783 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 08:47:58.226739   47783 command_runner.go:130] >     "disableApparmor": false,
	I1213 08:47:58.226744   47783 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 08:47:58.226748   47783 command_runner.go:130] >     "disableProcMount": false,
	I1213 08:47:58.226753   47783 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 08:47:58.226759   47783 command_runner.go:130] >     "enableCDI": true,
	I1213 08:47:58.226763   47783 command_runner.go:130] >     "enableSelinux": false,
	I1213 08:47:58.226769   47783 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 08:47:58.226775   47783 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 08:47:58.226782   47783 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 08:47:58.226787   47783 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 08:47:58.226797   47783 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 08:47:58.226806   47783 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 08:47:58.226811   47783 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 08:47:58.226819   47783 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226824   47783 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 08:47:58.226830   47783 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226837   47783 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 08:47:58.226843   47783 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 08:47:58.226853   47783 command_runner.go:130] >   },
	I1213 08:47:58.226860   47783 command_runner.go:130] >   "features": {
	I1213 08:47:58.226865   47783 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 08:47:58.226868   47783 command_runner.go:130] >   },
	I1213 08:47:58.226872   47783 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 08:47:58.226884   47783 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226898   47783 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226903   47783 command_runner.go:130] >   "runtimeHandlers": [
	I1213 08:47:58.226906   47783 command_runner.go:130] >     {
	I1213 08:47:58.226910   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226915   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226921   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226925   47783 command_runner.go:130] >       }
	I1213 08:47:58.226928   47783 command_runner.go:130] >     },
	I1213 08:47:58.226934   47783 command_runner.go:130] >     {
	I1213 08:47:58.226938   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226946   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226958   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226962   47783 command_runner.go:130] >       },
	I1213 08:47:58.226965   47783 command_runner.go:130] >       "name": "runc"
	I1213 08:47:58.226968   47783 command_runner.go:130] >     }
	I1213 08:47:58.226971   47783 command_runner.go:130] >   ],
	I1213 08:47:58.226976   47783 command_runner.go:130] >   "status": {
	I1213 08:47:58.226984   47783 command_runner.go:130] >     "conditions": [
	I1213 08:47:58.226989   47783 command_runner.go:130] >       {
	I1213 08:47:58.226993   47783 command_runner.go:130] >         "message": "",
	I1213 08:47:58.226997   47783 command_runner.go:130] >         "reason": "",
	I1213 08:47:58.227001   47783 command_runner.go:130] >         "status": true,
	I1213 08:47:58.227009   47783 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 08:47:58.227015   47783 command_runner.go:130] >       },
	I1213 08:47:58.227019   47783 command_runner.go:130] >       {
	I1213 08:47:58.227033   47783 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 08:47:58.227038   47783 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 08:47:58.227046   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227054   47783 command_runner.go:130] >         "type": "NetworkReady"
	I1213 08:47:58.227057   47783 command_runner.go:130] >       },
	I1213 08:47:58.227060   47783 command_runner.go:130] >       {
	I1213 08:47:58.227083   47783 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 08:47:58.227094   47783 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 08:47:58.227100   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227106   47783 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 08:47:58.227111   47783 command_runner.go:130] >       }
	I1213 08:47:58.227115   47783 command_runner.go:130] >     ]
	I1213 08:47:58.227118   47783 command_runner.go:130] >   }
	I1213 08:47:58.227121   47783 command_runner.go:130] > }
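The `crictl info` dump above closes with the CRI status conditions: RuntimeReady is true, while NetworkReady is false ("cni plugin not initialized") because /etc/cni/net.d is still empty at this point; minikube addresses that next by recommending kindnet. A minimal Go sketch of reading those conditions, assuming `crictl` is installed and pointed at the containerdEndpoint shown above (hypothetical helper, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criStatus mirrors just the "status.conditions" portion of `crictl info`.
	type criStatus struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  bool   `json:"status"`
				Reason  string `json:"reason"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		// May need root to reach /run/containerd/containerd.sock.
		out, err := exec.Command("crictl", "info").Output()
		if err != nil {
			panic(err)
		}
		var s criStatus
		if err := json.Unmarshal(out, &s); err != nil {
			panic(err)
		}
		for _, c := range s.Status.Conditions {
			fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
		}
	}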
	I1213 08:47:58.229345   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:58.229369   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:58.229387   47783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:47:58.229409   47783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:47:58.229527   47783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
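The rendered file bundles four YAML documents separated by `---`: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits and identifies them, assuming gopkg.in/yaml.v3 is available (illustration only; minikube generates the file from a template rather than parsing it back):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp step below
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// yaml.v3's Decoder returns one "---"-separated document per Decode call.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}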
	
	I1213 08:47:58.229596   47783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:47:58.237061   47783 command_runner.go:130] > kubeadm
	I1213 08:47:58.237081   47783 command_runner.go:130] > kubectl
	I1213 08:47:58.237086   47783 command_runner.go:130] > kubelet
	I1213 08:47:58.237099   47783 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:47:58.237151   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:47:58.244326   47783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:47:58.256951   47783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:47:58.269808   47783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:47:58.282145   47783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:47:58.286872   47783 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:47:58.287376   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:58.410199   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:47:59.022103   47783 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:47:59.022125   47783 certs.go:195] generating shared ca certs ...
	I1213 08:47:59.022141   47783 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.022352   47783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:47:59.022424   47783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:47:59.022444   47783 certs.go:257] generating profile certs ...
	I1213 08:47:59.022584   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:47:59.022699   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:47:59.022768   47783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:47:59.022808   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:47:59.022855   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:47:59.022876   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:47:59.022904   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:47:59.022937   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:47:59.022973   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:47:59.022995   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:47:59.023008   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:47:59.023095   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:47:59.023154   47783 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:47:59.023166   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:47:59.023224   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:47:59.023288   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:47:59.023328   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:47:59.023408   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:59.023471   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.023492   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.023541   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.024142   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:47:59.045491   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:47:59.066181   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:47:59.087256   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:47:59.105383   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:47:59.122457   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:47:59.141188   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:47:59.160057   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:47:59.177518   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:47:59.194757   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:47:59.211990   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:47:59.231728   47783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:47:59.244528   47783 ssh_runner.go:195] Run: openssl version
	I1213 08:47:59.250389   47783 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:47:59.250777   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.258690   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:47:59.266115   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269715   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269750   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269798   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.310445   47783 command_runner.go:130] > 51391683
	I1213 08:47:59.310954   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:47:59.318044   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.325154   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:47:59.332532   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336318   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336361   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336416   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.376950   47783 command_runner.go:130] > 3ec20f2e
	I1213 08:47:59.377430   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:47:59.384916   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.392420   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:47:59.399763   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403540   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403584   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403630   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.443918   47783 command_runner.go:130] > b5213941
	I1213 08:47:59.444419   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
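The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: a CA certificate is found by the system trust store when /etc/ssl/certs contains a `<subject-hash>.0` symlink to it, where the hash comes from `openssl x509 -hash`. A sketch of the same two steps in Go, assuming the `openssl` binary is present and the process can write to /etc/ssl/certs (hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the steps above: compute the OpenSSL subject
	// hash of a certificate and install the "<hash>.0" symlink in /etc/ssl/certs.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 4120.pem above
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/4120.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}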
	I1213 08:47:59.451702   47783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455380   47783 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455462   47783 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:47:59.455488   47783 command_runner.go:130] > Device: 259,1	Inode: 1311318     Links: 1
	I1213 08:47:59.455502   47783 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:59.455526   47783 command_runner.go:130] > Access: 2025-12-13 08:43:51.909308195 +0000
	I1213 08:47:59.455533   47783 command_runner.go:130] > Modify: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455538   47783 command_runner.go:130] > Change: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455544   47783 command_runner.go:130] >  Birth: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455631   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:47:59.496226   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.496712   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:47:59.538384   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.538813   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:47:59.584114   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.584598   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:47:59.624635   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.625106   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:47:59.665474   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.665947   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:47:59.706066   47783 command_runner.go:130] > Certificate will not expire
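Each `-checkend 86400` invocation above asks whether the certificate expires within the next 24 hours. The equivalent check with Go's standard crypto/x509, shown as a sketch (path taken from the log; minikube itself shells out to openssl as seen here):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin is the crypto/x509 equivalent of
	// `openssl x509 -noout -in <cert> -checkend 86400`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}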
	I1213 08:47:59.706546   47783 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:59.706648   47783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:47:59.706732   47783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:47:59.735062   47783 cri.go:89] found id: ""
	I1213 08:47:59.735134   47783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:47:59.742080   47783 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:47:59.742103   47783 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:47:59.742110   47783 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:47:59.743039   47783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:47:59.743056   47783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:47:59.743123   47783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:47:59.750746   47783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:47:59.751192   47783 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.751301   47783 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-074420" cluster setting kubeconfig missing "functional-074420" context setting]
	I1213 08:47:59.751688   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
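The repair above is triggered because the kubeconfig lacks both the cluster and the context entry for the profile. A sketch of that existence check using client-go's clientcmd loader (assumption: client-go is available; minikube's real logic lives in kubeconfig.go):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig named in the log and look up the profile's entries.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22128-2315/kubeconfig")
		if err != nil {
			panic(err)
		}
		const name = "functional-074420"
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
	}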
	I1213 08:47:59.752162   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.752336   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.752888   47783 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:47:59.752908   47783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:47:59.752914   47783 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:47:59.752919   47783 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:47:59.752923   47783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:47:59.753010   47783 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:47:59.753251   47783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:47:59.761240   47783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 08:47:59.761275   47783 kubeadm.go:602] duration metric: took 18.213538ms to restartPrimaryControlPlane
	I1213 08:47:59.761286   47783 kubeadm.go:403] duration metric: took 54.748002ms to StartCluster
	I1213 08:47:59.761334   47783 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.761412   47783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.762024   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.762236   47783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:47:59.762588   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:59.762635   47783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:47:59.762697   47783 addons.go:70] Setting storage-provisioner=true in profile "functional-074420"
	I1213 08:47:59.762710   47783 addons.go:239] Setting addon storage-provisioner=true in "functional-074420"
	I1213 08:47:59.762736   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.762848   47783 addons.go:70] Setting default-storageclass=true in profile "functional-074420"
	I1213 08:47:59.762897   47783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074420"
	I1213 08:47:59.763226   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.763230   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.768637   47783 out.go:179] * Verifying Kubernetes components...
	I1213 08:47:59.771460   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:59.801964   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.802130   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.802416   47783 addons.go:239] Setting addon default-storageclass=true in "functional-074420"
	I1213 08:47:59.802452   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.802879   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.817615   47783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:47:59.820407   47783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:47:59.820438   47783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:47:59.820510   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.832904   47783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:47:59.832927   47783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:47:59.832987   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.858620   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:59.867019   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:48:00.019931   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:48:00.079586   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:00.079699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:00.772755   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772781   47783 node_ready.go:35] waiting up to 6m0s for node "functional-074420" to be "Ready" ...
	W1213 08:48:00.772842   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772962   47783 retry.go:31] will retry after 342.791424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:00.773112   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:00.773133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773144   47783 retry.go:31] will retry after 244.896783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
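Every apply in this stretch fails the same way: kubelet was just restarted, so the apiserver static pod is not yet listening on port 8441 and kubectl's openapi download gets connection refused. The "will retry after Xms" lines show growing, randomized delays; a hypothetical retry helper in that spirit (the exact backoff policy of retry.go is an assumption here, not taken from the source):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter mimics the "will retry after Xms" pattern above: each
	// attempt waits roughly twice as long as the last, plus random jitter.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // up to 100% jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryWithJitter(5, 200*time.Millisecond, func() error {
			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
		})
	}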
	I1213 08:48:00.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:00.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.019052   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.079123   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.079165   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.079186   47783 retry.go:31] will retry after 233.412949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.116509   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.177616   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.181525   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.181562   47783 retry.go:31] will retry after 544.217788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.273820   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.273908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.274281   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.313528   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.373257   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.376997   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.377026   47783 retry.go:31] will retry after 483.901383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.726523   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.774029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.774123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.774536   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.788802   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.792516   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.792575   47783 retry.go:31] will retry after 627.991267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.861830   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.921846   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.925982   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.926017   47783 retry.go:31] will retry after 1.103907842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:02.420977   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:02.487960   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:02.491818   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.491849   47783 retry.go:31] will retry after 452.917795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.773507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:02.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
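In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-074420 roughly every 500ms, tolerating connection-refused errors until the apiserver comes back. A client-go sketch of such a readiness wait, under the assumption that wait.PollUntilContextTimeout and a standard clientset are used (minikube's own loop differs in detail):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node's Ready condition, swallowing transient
	// errors (e.g. connection refused during an apiserver restart) and retrying.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // retry, as the log's warnings do
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22128-2315/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "functional-074420"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}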
	I1213 08:48:02.945881   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:03.009201   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.013021   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.013052   47783 retry.go:31] will retry after 1.276929732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.030115   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:03.100586   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.104547   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.104578   47783 retry.go:31] will retry after 1.048810244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.273922   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.274012   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.274318   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:03.773006   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.773078   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.773422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.153636   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: the storageclass.yaml apply at 08:48:04.153 failed with the same OpenAPI/connection-refused validation error; retry.go:31 will retry after 1.498415757s]
	I1213 08:48:04.273795   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.273919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.274275   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.290503   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:04.351966   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.352013   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.352031   47783 retry.go:31] will retry after 2.776026758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
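Both manifests fail for the same reason: before sending anything, kubectl's client-side validation downloads the OpenAPI schema from the apiserver named in the kubeconfig (https://localhost:8441), and nothing is listening on that port, so every apply dies at the schema fetch. --validate=false would skip the fetch but not the outage, which is presumably why minikube keeps retrying instead. A quick standalone probe of the apiserver's /readyz endpoint shows which side is broken; this sketch is not minikube code, and the insecure TLS setting is for the probe only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert here; skip verification
		// for this diagnostic probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/readyz")
	if err != nil {
		// Matches the log: "dial tcp [::1]:8441: connect: connection refused"
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// Any HTTP status at all means the port is up and the fault lies elsewhere.
	fmt.Println("apiserver /readyz:", resp.Status)
}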
	[log condensed: 2 GETs of https://192.168.49.2:8441/api/v1/nodes/functional-074420 at 08:48:04.773 and 08:48:05.273, both empty responses; node_ready.go:55 warned "connection refused" (will retry)]
	I1213 08:48:05.711960   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:05.773532   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.773904   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the storageclass.yaml apply at 08:48:05.711 failed identically; retry.go:31 will retry after 3.257875901s]
	[log condensed: 2 node GETs at 08:48:06.273 and 08:48:06.773, both empty responses]
	I1213 08:48:07.129286   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[log condensed: the storage-provisioner.yaml apply at 08:48:07.129 failed identically; retry.go:31 will retry after 1.575099921s]
	[log condensed: 3 node GETs from 08:48:07.273 to 08:48:08.273, all empty responses; node_ready.go:55 warned "connection refused" (will retry)]
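Interleaved with the applies, minikube's node_ready.go polls the node object every ~500ms; the empty Response lines (status="" headers="" milliseconds=0) mean the TCP dial failed before any HTTP exchange took place. The same Ready check written against client-go looks roughly like this — an illustrative stand-in for minikube's poller, not its actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-074420", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the "connection refused"
			// warning seen in the log; keep polling.
			fmt.Println("will retry:", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			fmt.Println("node exists but is not Ready yet")
		}
		time.Sleep(500 * time.Millisecond) // the cadence visible in the timestamps above
	}
}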
	I1213 08:48:08.763743   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:08.773132   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.773211   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.773479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the storage-provisioner.yaml apply at 08:48:08.763 failed identically; retry.go:31 will retry after 4.082199617s]
	I1213 08:48:09.037077   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: the storageclass.yaml apply at 08:48:09.037 failed identically; retry.go:31 will retry after 4.733469164s]
	[log condensed: 8 node GETs from 08:48:09.273 to 08:48:12.773 at ~500ms intervals, all empty responses; node_ready.go:55 warned "connection refused" at 08:48:09.274 and 08:48:11.773]
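The Accept header on every poll (application/vnd.kubernetes.protobuf,application/json) is client-go content negotiation: prefer the compact protobuf encoding on the wire, fall back to JSON for types that only serialize as JSON. Setting the same preference explicitly on a rest.Config reproduces that header; the kubeconfig path below is the one from this log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Prefer protobuf, with JSON as fallback — this yields the exact Accept
	// header shown in the round_trippers request lines above.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured: %T\n", cs)
}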
	I1213 08:48:12.910787   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[log condensed: the storage-provisioner.yaml apply at 08:48:12.910 failed identically; retry.go:31 will retry after 8.911795338s]
	[log condensed: 2 node GETs at 08:48:13.273 and 08:48:13.773, both empty responses; node_ready.go:55 warned "connection refused" (will retry)]
	I1213 08:48:13.841699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: the storageclass.yaml apply at 08:48:13.841 failed identically; retry.go:31 will retry after 6.419298699s]
	[log condensed: 13 node GETs from 08:48:14.273 to 08:48:20.273 at ~500ms intervals, all empty responses; node_ready.go:55 warned "connection refused" at 08:48:16.273 and 08:48:18.273]
	I1213 08:48:20.320652   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: the storageclass.yaml apply at 08:48:20.320 failed identically; retry.go:31 will retry after 5.774410243s]
	[log condensed: 3 node GETs from 08:48:20.773 to 08:48:21.773, all empty responses; node_ready.go:55 warned "connection refused" (will retry)]
	I1213 08:48:21.885194   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[log condensed: the storage-provisioner.yaml apply at 08:48:21.885 failed identically; retry.go:31 will retry after 10.220008645s]
	[log condensed: 8 node GETs from 08:48:22.273 to 08:48:25.773 at ~500ms intervals, all empty responses; node_ready.go:55 warned "connection refused" at 08:48:22.774 and 08:48:25.273]
	I1213 08:48:26.158458   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[log condensed: the storageclass.yaml apply at 08:48:26.158 failed identically; retry.go:31 will retry after 15.443420543s]
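The error text itself names the escape hatch: --validate=false skips the OpenAPI fetch entirely, so the apply would proceed to the server connection stage (and, while the apiserver is down, fail there instead). A sketch of running the same command with validation off — a workaround illustration, not what this test does:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths and flags are taken verbatim from the log; --validate=false is
	// the switch the kubectl error message suggests.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}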
	[log condensed: 12 node GETs from 08:48:26.273 to 08:48:31.773 at ~500ms intervals, all empty responses; node_ready.go:55 warned "connection refused" at 08:48:27.274, 08:48:29.773 and 08:48:31.774]
	I1213 08:48:32.167590   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	[log condensed: the storage-provisioner.yaml apply at 08:48:32.167 failed identically; retry.go:31 will retry after 8.254164246s]
	[log condensed: 17 node GETs from 08:48:32.273 to 08:48:40.273 at ~500ms intervals, all empty responses; node_ready.go:55 warned "connection refused" at 08:48:34.273, 08:48:36.274 and 08:48:38.773]
	I1213 08:48:40.481720   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:40.548346   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:40.548381   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.548399   47783 retry.go:31] will retry after 23.072803829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
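
Here retry.go schedules another attempt 23.07s out; later applies in this log get independently drawn delays (14.2s, 43.7s, 22.7s), so the backoff appears jittered rather than fixed. A minimal sketch of this apply-then-backoff pattern, assuming a plain re-exec of the same kubectl command (illustrative only, not minikube's actual retry.go helper):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry shells out to kubectl and, on failure, sleeps a randomized
    // delay before trying again, mirroring the "will retry after ..." lines.
    func applyWithRetry(kubectl, manifest string, attempts int, maxBackoff time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                kubectl, "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %w\noutput:\n%s", manifest, err, out)
            delay := time.Duration(rand.Int63n(int64(maxBackoff)))
            fmt.Printf("will retry after %s: %v\n", delay, lastErr)
            time.Sleep(delay)
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "/etc/kubernetes/addons/storage-provisioner.yaml", 5, 45*time.Second); err != nil {
            fmt.Println("giving up:", err)
        }
    }
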
	I1213 08:48:40.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:48:40.773944   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:40.774217   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:40.774266   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:41.273996   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.274066   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.274319   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:41.658979   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:41.720805   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:41.720849   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.720869   47783 retry.go:31] will retry after 14.236359641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
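
The storageclass manifest fails in exactly the same way as storage-provisioner: kubectl's client-side validation needs the openapi document from the apiserver, the dial to localhost:8441 is refused, and the apply never gets as far as the server. Each addon manifest is retried on its own randomized schedule, which is why these apply attempts interleave with the 500ms node poll.
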
	I1213 08:48:41.774005   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.774085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.774430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.273146   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.273232   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:43.273054   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.273159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.273484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:43.273541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:43.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.773275   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.773578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.773108   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.773180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:45.273251   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.273329   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:45.273709   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:45.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.773758   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.774018   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.273828   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.273897   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.274229   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.772972   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.773043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.773362   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.273126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.773139   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.773579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:47.773642   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:48.273287   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.273371   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.273677   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:48.773341   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.773416   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.773680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.273506   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.773247   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.773323   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.773653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:49.773705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:50.273353   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.273427   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.273741   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:50.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.773764   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.774083   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.273888   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.273963   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.773911   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.774215   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:51.774264   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:52.273985   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.274074   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:52.773081   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.773168   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.273059   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.273129   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.273441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.773125   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.773200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:54.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:54.273482   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:54.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.773104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.773358   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.273041   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.273121   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.773058   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.773419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.957869   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:56.020865   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:56.020923   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.020946   47783 retry.go:31] will retry after 43.666748427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:56.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:56.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:57.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.273273   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.273598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:57.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.773100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.773380   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.273180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.773215   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.773607   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:58.773665   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:59.273927   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.273999   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.274264   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:59.773208   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.773283   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.773635   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.273263   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.273369   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.273817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.773908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.774222   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:00.774277   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:01.274021   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.274100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.274424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:01.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.773375   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.272965   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.273041   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.273365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.773094   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.773195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:03.273064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:03.273512   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:03.622173   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:03.678608   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:03.682133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.682162   47783 retry.go:31] will retry after 22.66884586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.773432   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.773502   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.273439   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.273517   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.273868   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.773210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:05.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.273221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.273546   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:05.273657   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:05.773252   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.773325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.773682   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:06.273974   47783 type.go:168] "Request Body" body=""
	I1213 08:49:06.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:06.274354   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:06.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:06.773140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:06.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:07.273190   47783 type.go:168] "Request Body" body=""
	I1213 08:49:07.273272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:07.273599   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:07.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:49:07.773084   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:07.773327   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:07.773371   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:08.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:49:08.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:08.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:08.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:49:08.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:08.773582   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:49:09.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:09.273483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:09.773186   47783 type.go:168] "Request Body" body=""
	I1213 08:49:09.773263   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:09.773596   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:09.773650   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:10.273086   47783 type.go:168] "Request Body" body=""
	I1213 08:49:10.273160   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:10.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:10.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:49:10.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:10.773368   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:11.273030   47783 type.go:168] "Request Body" body=""
	I1213 08:49:11.273123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:11.273462   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:11.773903   47783 type.go:168] "Request Body" body=""
	I1213 08:49:11.773985   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:11.774302   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:11.774358   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:12.273000   47783 type.go:168] "Request Body" body=""
	I1213 08:49:12.273072   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:12.273375   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:12.773077   47783 type.go:168] "Request Body" body=""
	I1213 08:49:12.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:12.773483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:13.273179   47783 type.go:168] "Request Body" body=""
	I1213 08:49:13.273252   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:13.273585   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:13.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:49:13.773337   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:13.773602   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:14.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:49:14.273179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:14.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:14.273573   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:14.773254   47783 type.go:168] "Request Body" body=""
	I1213 08:49:14.773326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:14.773613   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:15.272991   47783 type.go:168] "Request Body" body=""
	I1213 08:49:15.273071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:15.273379   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:15.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:49:15.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:15.773537   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:16.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:49:16.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:16.273493   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:16.773039   47783 type.go:168] "Request Body" body=""
	I1213 08:49:16.773105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:16.773399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:16.773452   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:17.272977   47783 type.go:168] "Request Body" body=""
	I1213 08:49:17.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:17.273386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:17.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:49:17.773156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:17.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:18.273029   47783 type.go:168] "Request Body" body=""
	I1213 08:49:18.273098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:18.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:18.773104   47783 type.go:168] "Request Body" body=""
	I1213 08:49:18.773178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:18.773501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:18.773552   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:19.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:49:19.273154   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:19.273462   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:19.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:49:19.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:19.773602   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:20.273287   47783 type.go:168] "Request Body" body=""
	I1213 08:49:20.273359   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:20.273692   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:20.773504   47783 type.go:168] "Request Body" body=""
	I1213 08:49:20.773579   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:20.773879   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:20.773925   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:21.273640   47783 type.go:168] "Request Body" body=""
	I1213 08:49:21.273706   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:21.273955   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:21.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:49:21.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:21.774073   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:22.273883   47783 type.go:168] "Request Body" body=""
	I1213 08:49:22.273954   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:22.274296   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:22.772994   47783 type.go:168] "Request Body" body=""
	I1213 08:49:22.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:22.773314   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:23.272983   47783 type.go:168] "Request Body" body=""
	I1213 08:49:23.273054   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:23.273387   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:23.273444   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:23.773118   47783 type.go:168] "Request Body" body=""
	I1213 08:49:23.773191   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:23.773501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:24.273058   47783 type.go:168] "Request Body" body=""
	I1213 08:49:24.273141   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:24.273459   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:24.773066   47783 type.go:168] "Request Body" body=""
	I1213 08:49:24.773148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:24.773494   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:25.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:49:25.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:25.273476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:25.273529   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:25.773465   47783 type.go:168] "Request Body" body=""
	I1213 08:49:25.773532   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:25.773794   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:26.273591   47783 type.go:168] "Request Body" body=""
	I1213 08:49:26.273658   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:26.273978   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:26.351410   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:26.407043   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410457   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410551   47783 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
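
After about 46 seconds of refused connections (08:48:40 to 08:49:26) the storage-provisioner callback gives up and the error is surfaced to the user. The --validate=false escape hatch suggested in the kubectl output would only skip the client-side schema check; the apply itself still needs a reachable apiserver, so it would not have helped here.
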
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 polls repeated every ~500ms from 08:49:26.773 through 08:49:39.273, each returning an empty response and a connection-refused error ...]
	W1213 08:49:39.273532   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
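"connect: connection refused" means the SYN reached 192.168.49.2 and was actively rejected: nothing is listening on port 8441, so the apiserver process is down rather than the route being broken (a routing or firewall black hole would surface as a timeout instead). A small illustrative probe that separates the two failure modes; address and timeout are taken from context, not from any minikube tooling:

// Probe the apiserver socket and report refused vs. timed out.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connection refused" -> port closed (apiserver down);
		// "i/o timeout"        -> host or route unreachable.
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("port open: apiserver socket is accepting connections")
}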
	I1213 08:49:39.687941   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:49:39.742570   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746037   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746134   47783 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:49:39.749234   47783 out.go:179] * Enabled addons: 
	I1213 08:49:39.751225   47783 addons.go:530] duration metric: took 1m39.988589749s for enable addons: enabled=[]
	[... readiness polling continued unchanged every ~500ms from 08:49:39.773 through at least 08:50:24.273, every GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 still failing with dial tcp 192.168.49.2:8441: connect: connection refused, node_ready.go:55 logging "will retry" every few cycles ...]
	 >
	I1213 08:50:24.273526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:24.773266   47783 type.go:168] "Request Body" body=""
	I1213 08:50:24.773335   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:24.773645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:24.773691   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:25.273185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.273256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:25.773486   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.773580   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.773863   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.273638   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.273722   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.274046   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.773772   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.773849   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.774110   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:26.774158   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:27.273949   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.274035   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:27.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.773157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.773492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.273060   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.273130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.773138   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.773213   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.773534   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:29.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.273332   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.273667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:29.273721   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:29.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.773492   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.773757   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.273528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.773268   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.773342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.773681   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.273131   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:31.773554   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:32.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.273254   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.273550   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:32.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.773114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.773420   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.773311   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.773638   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:33.773693   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:34.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:34.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.773483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.273285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.273659   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.773467   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.773539   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:35.773847   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:36.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.273501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:36.773198   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.773272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.773631   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.273322   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.273649   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.773139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:38.273158   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.273235   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.273563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:38.273617   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
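
Note that every warning in this run is a TCP-level refusal, not a timeout or a TLS failure, so the retry is cheap: the kernel rejects the dial immediately, which is why each poll completes in well under a millisecond. A short sketch of how a caller can classify this specific retryable error, assuming Linux semantics (this test runs on Docker_Linux arm64), where the net package wraps syscall.ECONNREFUSED in its error chain:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Address copied from the log; one-second dial budget is an assumption.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("apiserver port is open")
		return
	}
	// errors.Is unwraps *net.OpError down to the syscall errno.
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("not listening yet, retryable:", err)
	} else {
		fmt.Println("other dial error:", err)
	}
}
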
	I1213 08:50:38.773952   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.774033   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.774277   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.272962   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.273042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.273376   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.773154   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.773228   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.273301   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.273580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.773572   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.773643   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.773972   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:40.774022   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:41.273745   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.273822   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.274145   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:41.773824   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.774153   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.273992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.274071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.274419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.773457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:43.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:43.273477   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:43.773102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.773521   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.273220   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.273315   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.273660   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.773097   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.773359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:45.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.273133   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:45.273530   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:45.773107   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.773544   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.273227   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.273297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.273559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:47.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.273574   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:47.273628   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:47.773031   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.273067   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.773210   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.272997   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.273065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.273322   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.773187   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.773256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.773595   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:49.773649   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:50.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.273391   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.273716   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:50.773448   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.773521   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.773785   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.773185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.773266   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.773606   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:52.273034   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.273399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:52.273451   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:52.773078   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.773490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.273950   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.274337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.772992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.773065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.773378   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.272967   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.273044   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.273396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.773129   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.773203   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.773528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:54.773588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:55.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:55.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.273075   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.772978   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.773045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.773290   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:57.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:57.273428   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:57.773002   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.273022   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.273391   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.773196   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:59.273246   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.273324   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.273653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:59.273705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:59.773429   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.773497   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.773750   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.273609   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.774124   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:01.273906   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.273981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.274248   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:01.274298   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:01.772982   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.773396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.273510   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.773051   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.773122   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.773441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.773197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:03.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:04.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:04.773150   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.773225   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.773542   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.273267   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.273342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.273694   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.773534   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.773600   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.773861   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:05.773901   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:06.273627   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.273698   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.273995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:06.773786   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.773858   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.774165   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.273877   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.273959   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.274221   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.774080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.774408   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:07.774461   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.273511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:08.773065   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.273148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.773600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:10.273256   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.273325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:10.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:10.773469   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.773540   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.773888   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.273777   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.274082   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.773860   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.773926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:12.273909   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.273991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.274305   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:12.274363   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:12.773010   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.773086   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 poll repeated on a ~500ms cadence from 08:51:13 through 08:52:14, every attempt answered immediately (status="", milliseconds=0) with "connect: connection refused", and the following warning recurring roughly every two seconds ...]
	W1213 08:51:14.773446   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:15.273604   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.273678   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.273970   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:15.773929   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.774005   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.273966   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.274328   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.773036   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.773432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:17.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:17.273639   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:17.772950   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.773025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.773278   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.274027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.274101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.773356   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:19.773818   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:20.273485   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.273567   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.273890   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:20.773865   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.773932   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.774231   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.273999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.274069   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.274395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.772998   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.773076   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.773386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:22.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.273101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.273413   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:22.273462   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:22.773146   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.773221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.273239   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.273317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.273630   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.773089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.773346   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:24.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:24.273551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:24.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.273040   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.773304   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:26.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.273187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.273523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:26.273583   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:26.773246   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.773317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.773577   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.273252   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.273645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.773350   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.773425   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.773742   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.273499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.773080   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:28.773541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:29.273204   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.273624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:29.773318   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.773387   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.773650   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.273487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.773484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:31.273046   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.273373   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:31.273414   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:31.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.773520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.773087   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.773337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:33.273004   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.273082   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:33.273472   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:33.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.773405   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.273043   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.273120   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.773227   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.773558   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:35.273270   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.273350   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.273680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:35.273739   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:35.773621   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.773688   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.773944   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.273767   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.273909   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.274226   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.773955   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.774030   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.774359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.273017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.273085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.273361   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:37.773513   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:38.273181   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.273262   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.273612   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:38.773291   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.773360   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.773368   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.773449   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.773774   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:39.773830   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:40.273560   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.273630   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.273887   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:40.773805   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.773877   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.774208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.273838   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.273922   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.274250   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.774079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.774332   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:41.774381   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:42.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.273177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:42.773228   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.773303   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.273031   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.273352   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.773424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:44.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:44.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:44.273504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:44.273560   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:44.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:52:44.773151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:44.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:45.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:52:45.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:45.273519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:45.773133   47783 type.go:168] "Request Body" body=""
	I1213 08:52:45.773210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:45.773516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:46.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:52:46.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:46.273568   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:46.273635   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:46.773088   47783 type.go:168] "Request Body" body=""
	I1213 08:52:46.773186   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:46.773520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:47.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:47.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:47.273491   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:47.774025   47783 type.go:168] "Request Body" body=""
	I1213 08:52:47.774092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:47.774383   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:48.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:52:48.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:48.273504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:48.773232   47783 type.go:168] "Request Body" body=""
	I1213 08:52:48.773322   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:48.773644   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:48.773698   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:49.273023   47783 type.go:168] "Request Body" body=""
	I1213 08:52:49.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:49.273414   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:49.773139   47783 type.go:168] "Request Body" body=""
	I1213 08:52:49.773208   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:49.773528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:50.273274   47783 type.go:168] "Request Body" body=""
	I1213 08:52:50.273347   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:50.273702   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:50.773600   47783 type.go:168] "Request Body" body=""
	I1213 08:52:50.773672   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:50.773927   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:50.773976   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:51.273754   47783 type.go:168] "Request Body" body=""
	I1213 08:52:51.273831   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:51.274146   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:51.773932   47783 type.go:168] "Request Body" body=""
	I1213 08:52:51.774002   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:51.774339   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:52.273970   47783 type.go:168] "Request Body" body=""
	I1213 08:52:52.274046   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:52.274330   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:52.773039   47783 type.go:168] "Request Body" body=""
	I1213 08:52:52.773127   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:52.773475   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:53.273012   47783 type.go:168] "Request Body" body=""
	I1213 08:52:53.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:53.273405   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:53.273464   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:53.773963   47783 type.go:168] "Request Body" body=""
	I1213 08:52:53.774030   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:53.774299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:54.272999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:54.273074   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:54.273401   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:54.773125   47783 type.go:168] "Request Body" body=""
	I1213 08:52:54.773203   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:54.773544   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:55.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:52:55.273123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:55.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:55.773100   47783 type.go:168] "Request Body" body=""
	I1213 08:52:55.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:55.773488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:55.773542   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:56.273181   47783 type.go:168] "Request Body" body=""
	I1213 08:52:56.273253   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:56.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:56.773148   47783 type.go:168] "Request Body" body=""
	I1213 08:52:56.773218   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:56.773518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:57.273086   47783 type.go:168] "Request Body" body=""
	I1213 08:52:57.273163   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:57.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:57.773089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:57.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:57.773480   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:58.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:52:58.273119   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:58.273451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:58.273511   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:58.773075   47783 type.go:168] "Request Body" body=""
	I1213 08:52:58.773146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:58.773467   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:59.273192   47783 type.go:168] "Request Body" body=""
	I1213 08:52:59.273273   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:59.273605   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:59.773326   47783 type.go:168] "Request Body" body=""
	I1213 08:52:59.773396   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:59.773672   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:00.273172   47783 type.go:168] "Request Body" body=""
	I1213 08:53:00.273263   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:00.273742   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:00.273809   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:00.773616   47783 type.go:168] "Request Body" body=""
	I1213 08:53:00.773702   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:00.774032   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:01.273744   47783 type.go:168] "Request Body" body=""
	I1213 08:53:01.273814   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:01.274061   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:01.773849   47783 type.go:168] "Request Body" body=""
	I1213 08:53:01.773919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:01.774245   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:02.272981   47783 type.go:168] "Request Body" body=""
	I1213 08:53:02.273054   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:02.273367   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:02.773046   47783 type.go:168] "Request Body" body=""
	I1213 08:53:02.773118   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:02.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:02.773469   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:03.273079   47783 type.go:168] "Request Body" body=""
	I1213 08:53:03.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:03.273495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:03.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:53:03.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:03.773488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:04.273172   47783 type.go:168] "Request Body" body=""
	I1213 08:53:04.273245   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:04.273563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:04.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:04.773147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:04.773477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:04.773531   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 poll above repeats on the same ~500 ms cadence from 08:53:05 through 08:53:59.773, every attempt returning an empty response in 0 ms, and node_ready.go:55 logs the identical "connection refused" retry warning roughly every 2-2.5 s]
	I1213 08:54:00.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.273367   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:54:00.273821   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:54:00.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.773932   47783 node_ready.go:38] duration metric: took 6m0.00107019s for node "functional-074420" to be "Ready" ...
	I1213 08:54:00.777004   47783 out.go:203] 
	W1213 08:54:00.779921   47783 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 08:54:00.779957   47783 out.go:285] * 
	W1213 08:54:00.782360   47783 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:54:00.785205   47783 out.go:203] 
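
	[editor note] The pattern above is uniform for the full six-minute wait: every poll of the node object fails instantly with "connect: connection refused", meaning nothing is listening on 192.168.49.2:8441 at all (consistent with the empty container-status table below, which shows no control-plane containers running). A refused connection is worth distinguishing from a timeout, since the former points at kube-apiserver being down while the latter would point at networking. The following is a minimal diagnostic sketch, not minikube's own code; only the address and port are taken from the log, everything else is an illustrative assumption:

	    // probe.go: hypothetical sketch of a connectivity probe against the
	    // apiserver endpoint seen in the log (192.168.49.2:8441).
	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	addr := "192.168.49.2:8441" // from the failing GET requests above
	    	for i := 0; i < 10; i++ {
	    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	    		if err != nil {
	    			// "connection refused" means the port is closed (apiserver down);
	    			// a timeout would instead suggest a firewall or routing problem.
	    			fmt.Printf("attempt %d: %v\n", i+1, err)
	    			time.Sleep(500 * time.Millisecond) // mirror the ~500 ms poll cadence
	    			continue
	    		}
	    		conn.Close()
	    		fmt.Printf("attempt %d: port open\n", i+1)
	    		return
	    	}
	    }

	Run against the node IP while the test is wedged: ten consecutive refusals in a few seconds confirm the port is closed rather than blackholed.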
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978483314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978503105Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978537772Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978553321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978574055Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978590318Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978599738Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978615877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978633297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978662992Z" level=info msg="Connect containerd service"
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978978999Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.979550081Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000071290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000154589Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000504016Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000574737Z" level=info msg="Start recovering state"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043623992Z" level=info msg="Start event monitor"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043840191Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043918469Z" level=info msg="Start streaming server"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043996361Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044117560Z" level=info msg="runtime interface starting up..."
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044176883Z" level=info msg="starting plugins..."
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044246234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:47:58 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.046961351Z" level=info msg="containerd successfully booted in 0.089510s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:54:02.902033    8443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:02.902685    8443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:02.904381    8443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:02.904999    8443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:02.906647    8443 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 08:54:02 up 36 min,  0 user,  load average: 0.20, 0.29, 0.49
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:53:59 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:00 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 13 08:54:00 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:00 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:00 functional-074420 kubelet[8329]: E1213 08:54:00.566956    8329 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:00 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:00 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:01 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 13 08:54:01 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:01 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:01 functional-074420 kubelet[8334]: E1213 08:54:01.348056    8334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:01 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:01 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 13 08:54:02 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 kubelet[8354]: E1213 08:54:02.106179    8354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 13 08:54:02 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 kubelet[8424]: E1213 08:54:02.830782    8424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (388.392478ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.69s)
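Note on the failure above: the kubelet journal shows why the apiserver never came back — on this host the v1.35.0-beta.0 kubelet refuses to start under cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd cycles it through restarts 808-811 and every API probe on 192.168.49.2:8441 is refused. A quick way to confirm the host's cgroup version — a sketch using plain coreutils, assuming shell access to the CI host or the kic container:

	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on cgroup v1
	stat -fc %T /sys/fs/cgroup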

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-074420 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-074420 get po -A: exit status 1 (60.925546ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-074420 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-074420 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-074420 get po -A"
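All three assertions above fail for the same root cause: nothing is listening on 192.168.49.2:8441, so kubectl's connection is refused before any pod listing can happen. To separate "apiserver down" from "stale kubeconfig", one could probe the endpoint directly — a sketch, assuming shell access to the CI host; the address is taken from the error above:

	# "connection refused" here confirms the apiserver itself is down,
	# rather than kubectl pointing at the wrong endpoint
	curl -k --max-time 5 https://192.168.49.2:8441/healthz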
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (305.483695ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount-9p | grep 9p                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh -- ls -la /mount-9p                                                                                                               │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh sudo umount -f /mount-9p                                                                                                          │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ mount          │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1                                      │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ update-context │ functional-049633 update-context --alsologtostderr -v=2                                                                                                 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format short --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount1                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh pgrep buildkitd                                                                                                                   │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ ssh            │ functional-049633 ssh findmnt -T /mount2                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ ssh            │ functional-049633 ssh findmnt -T /mount3                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount          │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image          │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image          │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete         │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start          │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start          │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:47:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:47:55.372522   47783 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:47:55.372733   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.372759   47783 out.go:374] Setting ErrFile to fd 2...
	I1213 08:47:55.372779   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.373071   47783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:47:55.373500   47783 out.go:368] Setting JSON to false
	I1213 08:47:55.374339   47783 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1828,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:47:55.374435   47783 start.go:143] virtualization:  
	I1213 08:47:55.378014   47783 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:47:55.381059   47783 notify.go:221] Checking for updates...
	I1213 08:47:55.381456   47783 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:47:55.384645   47783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:47:55.387475   47783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:55.390285   47783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:47:55.393179   47783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:47:55.396170   47783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:47:55.399625   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:55.399723   47783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:47:55.421152   47783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:47:55.421278   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.479286   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.469949512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.479410   47783 docker.go:319] overlay module found
	I1213 08:47:55.482469   47783 out.go:179] * Using the docker driver based on existing profile
	I1213 08:47:55.485237   47783 start.go:309] selected driver: docker
	I1213 08:47:55.485259   47783 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.485359   47783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:47:55.485469   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.552137   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.542465837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.552549   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:55.552614   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:55.552664   47783 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.555904   47783 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:47:55.558801   47783 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:47:55.561846   47783 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:47:55.564866   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:55.564922   47783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:47:55.564938   47783 cache.go:65] Caching tarball of preloaded images
	I1213 08:47:55.564963   47783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:47:55.565027   47783 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:47:55.565039   47783 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:47:55.565188   47783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:47:55.585020   47783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:47:55.585044   47783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:47:55.585064   47783 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:47:55.585094   47783 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:47:55.585169   47783 start.go:364] duration metric: took 45.161µs to acquireMachinesLock for "functional-074420"
	I1213 08:47:55.585195   47783 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:47:55.585204   47783 fix.go:54] fixHost starting: 
	I1213 08:47:55.585456   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:55.601925   47783 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:47:55.601956   47783 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:47:55.605110   47783 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:47:55.605143   47783 machine.go:94] provisionDockerMachine start ...
	I1213 08:47:55.605228   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.622184   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.622521   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.622536   47783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:47:55.770899   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.770923   47783 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:47:55.770990   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.788917   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.789224   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.789243   47783 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:47:55.944141   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.944216   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.963276   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.963669   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.963693   47783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:47:56.123813   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:47:56.123839   47783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:47:56.123865   47783 ubuntu.go:190] setting up certificates
	I1213 08:47:56.123875   47783 provision.go:84] configureAuth start
	I1213 08:47:56.123935   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.141934   47783 provision.go:143] copyHostCerts
	I1213 08:47:56.141983   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142030   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:47:56.142044   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142121   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:47:56.142216   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142238   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:47:56.142247   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142276   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:47:56.142329   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142361   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:47:56.142370   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142397   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:47:56.142457   47783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:47:56.320875   47783 provision.go:177] copyRemoteCerts
	I1213 08:47:56.320949   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:47:56.320994   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.338054   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.442993   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 08:47:56.443052   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:47:56.459467   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 08:47:56.459650   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:47:56.476836   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 08:47:56.476894   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:47:56.494408   47783 provision.go:87] duration metric: took 370.509157ms to configureAuth
	I1213 08:47:56.494435   47783 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:47:56.494611   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:56.494624   47783 machine.go:97] duration metric: took 889.474725ms to provisionDockerMachine
	I1213 08:47:56.494633   47783 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:47:56.494644   47783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:47:56.494700   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:47:56.494748   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.511710   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.615158   47783 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:47:56.618357   47783 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:47:56.618378   47783 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:47:56.618383   47783 command_runner.go:130] > VERSION_ID="12"
	I1213 08:47:56.618388   47783 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:47:56.618392   47783 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:47:56.618422   47783 command_runner.go:130] > ID=debian
	I1213 08:47:56.618436   47783 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:47:56.618441   47783 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:47:56.618448   47783 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:47:56.618517   47783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:47:56.618537   47783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:47:56.618550   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:47:56.618607   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:47:56.618691   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:47:56.618702   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1213 08:47:56.618783   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:47:56.618792   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1213 08:47:56.618842   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:47:56.626162   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:56.643608   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:47:56.661460   47783 start.go:296] duration metric: took 166.811201ms for postStartSetup
	I1213 08:47:56.661553   47783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:47:56.661603   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.678627   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.785005   47783 command_runner.go:130] > 14%
	I1213 08:47:56.785418   47783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:47:56.789762   47783 command_runner.go:130] > 169G
	I1213 08:47:56.790146   47783 fix.go:56] duration metric: took 1.204938515s for fixHost
	I1213 08:47:56.790168   47783 start.go:83] releasing machines lock for "functional-074420", held for 1.204983079s
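The two df probes above are how minikube samples disk pressure on /var over SSH; a minimal stand-alone sketch of the same checks, commands exactly as in the log:

    # Percent of /var in use (row 2, column 5 of `df -h`): the "14%" seen above
    df -h /var | awk 'NR==2{print $5}'
    # Free space on /var in whole gigabytes (the "169G" seen above)
    df -BG /var | awk 'NR==2{print $4}'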
	I1213 08:47:56.790231   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.811813   47783 ssh_runner.go:195] Run: cat /version.json
	I1213 08:47:56.811877   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.812180   47783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:47:56.812227   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.839131   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.843453   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:57.035647   47783 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 08:47:57.038511   47783 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:47:57.038690   47783 ssh_runner.go:195] Run: systemctl --version
	I1213 08:47:57.044708   47783 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:47:57.044761   47783 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:47:57.045134   47783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:47:57.049401   47783 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:47:57.049443   47783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:47:57.049503   47783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:47:57.057127   47783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:47:57.057158   47783 start.go:496] detecting cgroup driver to use...
	I1213 08:47:57.057211   47783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:47:57.057279   47783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:47:57.072743   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:47:57.086014   47783 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:47:57.086118   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:47:57.102029   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:47:57.115088   47783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:47:57.226726   47783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:47:57.347870   47783 docker.go:234] disabling docker service ...
	I1213 08:47:57.347940   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:47:57.363202   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:47:57.377010   47783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:47:57.506500   47783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:47:57.649131   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
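Taken together, the runtime teardown above amounts to the following systemctl sequence; a sketch reproducing it from the log (the odd `service` token in the final probe is minikube's own invocation, not a typo introduced here):

    sudo systemctl stop -f crio
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet service docker   # non-zero exit once docker is down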
	I1213 08:47:57.662497   47783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:47:57.677018   47783 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
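The crictl.yaml written above is a one-key file, and crictl reads /etc/crictl.yaml by default, so a quick verification is:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo crictl version   # should answer via containerd once the endpoint is up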
	I1213 08:47:57.678207   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:47:57.688555   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:47:57.698272   47783 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:47:57.698370   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:47:57.707500   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.716692   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:47:57.725739   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.734886   47783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:47:57.743485   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:47:57.753073   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:47:57.761993   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:47:57.770719   47783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:47:57.777695   47783 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:47:57.778683   47783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:47:57.786237   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:57.908393   47783 ssh_runner.go:195] Run: sudo systemctl restart containerd
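The sed calls above edit /etc/containerd/config.toml in place; an equivalent one-shot sketch of the core rewrites (patterns and paths exactly as in the log), followed by the same reload-and-restart:

    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
      -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd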
	I1213 08:47:58.046253   47783 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:47:58.046368   47783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:47:58.050493   47783 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 08:47:58.050558   47783 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:47:58.050578   47783 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1213 08:47:58.050603   47783 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:58.050636   47783 command_runner.go:130] > Access: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050663   47783 command_runner.go:130] > Modify: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050685   47783 command_runner.go:130] > Change: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050720   47783 command_runner.go:130] >  Birth: -
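The 60s socket wait above boils down to polling stat on the socket path; a minimal sketch of the same wait:

    timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'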
	I1213 08:47:58.050927   47783 start.go:564] Will wait 60s for crictl version
	I1213 08:47:58.051002   47783 ssh_runner.go:195] Run: which crictl
	I1213 08:47:58.054661   47783 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:47:58.054852   47783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:47:58.077876   47783 command_runner.go:130] > Version:  0.1.0
	I1213 08:47:58.077939   47783 command_runner.go:130] > RuntimeName:  containerd
	I1213 08:47:58.077961   47783 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 08:47:58.077985   47783 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:47:58.080051   47783 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:47:58.080159   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.100302   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.101953   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.119235   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.126521   47783 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:47:58.129463   47783 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:47:58.145273   47783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:47:58.149369   47783 command_runner.go:130] > 192.168.49.1	host.minikube.internal
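The grep above checks that the host gateway is already pinned in /etc/hosts; a sketch of the check-and-append idiom (minikube only rewrites the file when the entry is missing):

    grep -q 'host.minikube.internal' /etc/hosts || \
      printf '192.168.49.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts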
	I1213 08:47:58.149453   47783 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:47:58.149580   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:58.149657   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.174191   47783 command_runner.go:130] > {
	I1213 08:47:58.174214   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.174219   47783 command_runner.go:130] >     {
	I1213 08:47:58.174232   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.174237   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174242   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.174246   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174250   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174259   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.174263   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174267   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.174271   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174275   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174278   47783 command_runner.go:130] >     },
	I1213 08:47:58.174281   47783 command_runner.go:130] >     {
	I1213 08:47:58.174289   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.174299   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174305   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.174308   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174313   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174321   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.174328   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174332   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.174336   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174340   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174343   47783 command_runner.go:130] >     },
	I1213 08:47:58.174349   47783 command_runner.go:130] >     {
	I1213 08:47:58.174356   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.174361   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174366   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.174371   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174384   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174395   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.174399   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174403   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.174409   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.174417   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174421   47783 command_runner.go:130] >     },
	I1213 08:47:58.174424   47783 command_runner.go:130] >     {
	I1213 08:47:58.174430   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.174436   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174441   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.174444   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174449   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174458   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.174464   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174468   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.174472   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174475   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174478   47783 command_runner.go:130] >       },
	I1213 08:47:58.174487   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174491   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174494   47783 command_runner.go:130] >     },
	I1213 08:47:58.174497   47783 command_runner.go:130] >     {
	I1213 08:47:58.174507   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.174511   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174518   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.174522   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174526   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174533   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.174539   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174545   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.174551   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174559   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174562   47783 command_runner.go:130] >       },
	I1213 08:47:58.174566   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174576   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174580   47783 command_runner.go:130] >     },
	I1213 08:47:58.174584   47783 command_runner.go:130] >     {
	I1213 08:47:58.174594   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.174601   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174607   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.174610   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174614   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174625   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.174631   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174635   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.174638   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174642   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174645   47783 command_runner.go:130] >       },
	I1213 08:47:58.174649   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174655   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174659   47783 command_runner.go:130] >     },
	I1213 08:47:58.174663   47783 command_runner.go:130] >     {
	I1213 08:47:58.174671   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.174677   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174681   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.174684   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174688   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174699   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.174704   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174709   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.174713   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174716   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174721   47783 command_runner.go:130] >     },
	I1213 08:47:58.174725   47783 command_runner.go:130] >     {
	I1213 08:47:58.174732   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.174742   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174747   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.174753   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174757   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174765   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.174774   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174781   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.174784   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174788   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174797   47783 command_runner.go:130] >       },
	I1213 08:47:58.174802   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174808   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174811   47783 command_runner.go:130] >     },
	I1213 08:47:58.174814   47783 command_runner.go:130] >     {
	I1213 08:47:58.174821   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.174828   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174833   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.174836   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174840   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174848   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.174851   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174855   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.174860   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174864   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.174875   47783 command_runner.go:130] >       },
	I1213 08:47:58.174880   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174884   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.174887   47783 command_runner.go:130] >     }
	I1213 08:47:58.174890   47783 command_runner.go:130] >   ]
	I1213 08:47:58.174893   47783 command_runner.go:130] > }
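The JSON above is what the preload check parses to decide whether extraction can be skipped; to eyeball the same inventory by hand, a jq one-liner (jq assumed to be available on the host) over the identical crictl call:

    sudo crictl images --output json | jq -r '.images[].repoTags[]'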
	I1213 08:47:58.175043   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.175056   47783 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:47:58.175117   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.196592   47783 command_runner.go:130] > {
	I1213 08:47:58.196612   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.196616   47783 command_runner.go:130] >     {
	I1213 08:47:58.196626   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.196631   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196637   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.196641   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196644   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196654   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.196660   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196664   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.196674   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196678   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196682   47783 command_runner.go:130] >     },
	I1213 08:47:58.196685   47783 command_runner.go:130] >     {
	I1213 08:47:58.196701   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.196710   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196715   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.196719   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196723   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196732   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.196739   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196745   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.196753   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196757   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196764   47783 command_runner.go:130] >     },
	I1213 08:47:58.196768   47783 command_runner.go:130] >     {
	I1213 08:47:58.196782   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.196787   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196793   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.196798   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196807   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196825   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.196833   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196838   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.196847   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.196852   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196861   47783 command_runner.go:130] >     },
	I1213 08:47:58.196864   47783 command_runner.go:130] >     {
	I1213 08:47:58.196871   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.196875   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196880   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.196884   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196888   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196897   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.196904   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196908   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.196912   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.196916   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.196924   47783 command_runner.go:130] >       },
	I1213 08:47:58.196929   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196936   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196940   47783 command_runner.go:130] >     },
	I1213 08:47:58.196943   47783 command_runner.go:130] >     {
	I1213 08:47:58.196953   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.196958   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196963   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.196968   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196973   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196984   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.196993   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196998   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.197005   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197015   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197022   47783 command_runner.go:130] >       },
	I1213 08:47:58.197030   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197034   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197037   47783 command_runner.go:130] >     },
	I1213 08:47:58.197040   47783 command_runner.go:130] >     {
	I1213 08:47:58.197048   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.197056   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197063   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.197069   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197074   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197086   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.197094   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197098   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.197105   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197109   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197113   47783 command_runner.go:130] >       },
	I1213 08:47:58.197117   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197121   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197126   47783 command_runner.go:130] >     },
	I1213 08:47:58.197129   47783 command_runner.go:130] >     {
	I1213 08:47:58.197140   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.197144   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197154   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.197158   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197162   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197173   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.197180   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197185   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.197189   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197194   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197198   47783 command_runner.go:130] >     },
	I1213 08:47:58.197201   47783 command_runner.go:130] >     {
	I1213 08:47:58.197209   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.197216   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197225   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.197232   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197237   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197248   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.197255   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197259   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.197266   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197270   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197273   47783 command_runner.go:130] >       },
	I1213 08:47:58.197279   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197283   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197286   47783 command_runner.go:130] >     },
	I1213 08:47:58.197289   47783 command_runner.go:130] >     {
	I1213 08:47:58.197296   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.197304   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197309   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.197313   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197320   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197329   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.197335   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197339   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.197346   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197351   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.197356   47783 command_runner.go:130] >       },
	I1213 08:47:58.197362   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197366   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.197369   47783 command_runner.go:130] >     }
	I1213 08:47:58.197372   47783 command_runner.go:130] >   ]
	I1213 08:47:58.197375   47783 command_runner.go:130] > }
	I1213 08:47:58.199421   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.199439   47783 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:47:58.199455   47783 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:47:58.199601   47783 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
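The [Service] override above is installed as a systemd drop-in (10-kubeadm.conf, scp'd further down); once written, the merged unit can be inspected with:

    systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in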
	I1213 08:47:58.199669   47783 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:47:58.226280   47783 command_runner.go:130] > {
	I1213 08:47:58.226303   47783 command_runner.go:130] >   "cniconfig": {
	I1213 08:47:58.226310   47783 command_runner.go:130] >     "Networks": [
	I1213 08:47:58.226314   47783 command_runner.go:130] >       {
	I1213 08:47:58.226319   47783 command_runner.go:130] >         "Config": {
	I1213 08:47:58.226324   47783 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 08:47:58.226329   47783 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 08:47:58.226333   47783 command_runner.go:130] >           "Plugins": [
	I1213 08:47:58.226336   47783 command_runner.go:130] >             {
	I1213 08:47:58.226340   47783 command_runner.go:130] >               "Network": {
	I1213 08:47:58.226344   47783 command_runner.go:130] >                 "ipam": {},
	I1213 08:47:58.226350   47783 command_runner.go:130] >                 "type": "loopback"
	I1213 08:47:58.226358   47783 command_runner.go:130] >               },
	I1213 08:47:58.226364   47783 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 08:47:58.226371   47783 command_runner.go:130] >             }
	I1213 08:47:58.226374   47783 command_runner.go:130] >           ],
	I1213 08:47:58.226384   47783 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 08:47:58.226388   47783 command_runner.go:130] >         },
	I1213 08:47:58.226398   47783 command_runner.go:130] >         "IFName": "lo"
	I1213 08:47:58.226402   47783 command_runner.go:130] >       }
	I1213 08:47:58.226405   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226410   47783 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 08:47:58.226415   47783 command_runner.go:130] >     "PluginDirs": [
	I1213 08:47:58.226419   47783 command_runner.go:130] >       "/opt/cni/bin"
	I1213 08:47:58.226425   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226430   47783 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 08:47:58.226442   47783 command_runner.go:130] >     "Prefix": "eth"
	I1213 08:47:58.226445   47783 command_runner.go:130] >   },
	I1213 08:47:58.226448   47783 command_runner.go:130] >   "config": {
	I1213 08:47:58.226454   47783 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 08:47:58.226459   47783 command_runner.go:130] >       "/etc/cdi",
	I1213 08:47:58.226466   47783 command_runner.go:130] >       "/var/run/cdi"
	I1213 08:47:58.226472   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226480   47783 command_runner.go:130] >     "cni": {
	I1213 08:47:58.226484   47783 command_runner.go:130] >       "binDir": "",
	I1213 08:47:58.226487   47783 command_runner.go:130] >       "binDirs": [
	I1213 08:47:58.226491   47783 command_runner.go:130] >         "/opt/cni/bin"
	I1213 08:47:58.226495   47783 command_runner.go:130] >       ],
	I1213 08:47:58.226499   47783 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 08:47:58.226503   47783 command_runner.go:130] >       "confTemplate": "",
	I1213 08:47:58.226507   47783 command_runner.go:130] >       "ipPref": "",
	I1213 08:47:58.226510   47783 command_runner.go:130] >       "maxConfNum": 1,
	I1213 08:47:58.226514   47783 command_runner.go:130] >       "setupSerially": false,
	I1213 08:47:58.226519   47783 command_runner.go:130] >       "useInternalLoopback": false
	I1213 08:47:58.226524   47783 command_runner.go:130] >     },
	I1213 08:47:58.226530   47783 command_runner.go:130] >     "containerd": {
	I1213 08:47:58.226538   47783 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 08:47:58.226543   47783 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 08:47:58.226548   47783 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 08:47:58.226552   47783 command_runner.go:130] >       "runtimes": {
	I1213 08:47:58.226557   47783 command_runner.go:130] >         "runc": {
	I1213 08:47:58.226562   47783 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 08:47:58.226566   47783 command_runner.go:130] >           "PodAnnotations": null,
	I1213 08:47:58.226570   47783 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 08:47:58.226575   47783 command_runner.go:130] >           "cgroupWritable": false,
	I1213 08:47:58.226580   47783 command_runner.go:130] >           "cniConfDir": "",
	I1213 08:47:58.226586   47783 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 08:47:58.226591   47783 command_runner.go:130] >           "io_type": "",
	I1213 08:47:58.226596   47783 command_runner.go:130] >           "options": {
	I1213 08:47:58.226601   47783 command_runner.go:130] >             "BinaryName": "",
	I1213 08:47:58.226607   47783 command_runner.go:130] >             "CriuImagePath": "",
	I1213 08:47:58.226612   47783 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 08:47:58.226616   47783 command_runner.go:130] >             "IoGid": 0,
	I1213 08:47:58.226620   47783 command_runner.go:130] >             "IoUid": 0,
	I1213 08:47:58.226629   47783 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 08:47:58.226633   47783 command_runner.go:130] >             "Root": "",
	I1213 08:47:58.226641   47783 command_runner.go:130] >             "ShimCgroup": "",
	I1213 08:47:58.226649   47783 command_runner.go:130] >             "SystemdCgroup": false
	I1213 08:47:58.226652   47783 command_runner.go:130] >           },
	I1213 08:47:58.226657   47783 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 08:47:58.226666   47783 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 08:47:58.226678   47783 command_runner.go:130] >           "runtimePath": "",
	I1213 08:47:58.226683   47783 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 08:47:58.226689   47783 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 08:47:58.226698   47783 command_runner.go:130] >           "snapshotter": ""
	I1213 08:47:58.226702   47783 command_runner.go:130] >         }
	I1213 08:47:58.226705   47783 command_runner.go:130] >       }
	I1213 08:47:58.226710   47783 command_runner.go:130] >     },
	I1213 08:47:58.226721   47783 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 08:47:58.226728   47783 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 08:47:58.226735   47783 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 08:47:58.226739   47783 command_runner.go:130] >     "disableApparmor": false,
	I1213 08:47:58.226744   47783 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 08:47:58.226748   47783 command_runner.go:130] >     "disableProcMount": false,
	I1213 08:47:58.226753   47783 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 08:47:58.226759   47783 command_runner.go:130] >     "enableCDI": true,
	I1213 08:47:58.226763   47783 command_runner.go:130] >     "enableSelinux": false,
	I1213 08:47:58.226769   47783 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 08:47:58.226775   47783 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 08:47:58.226782   47783 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 08:47:58.226787   47783 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 08:47:58.226797   47783 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 08:47:58.226806   47783 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 08:47:58.226811   47783 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 08:47:58.226819   47783 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226824   47783 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 08:47:58.226830   47783 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226837   47783 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 08:47:58.226843   47783 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 08:47:58.226853   47783 command_runner.go:130] >   },
	I1213 08:47:58.226860   47783 command_runner.go:130] >   "features": {
	I1213 08:47:58.226865   47783 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 08:47:58.226868   47783 command_runner.go:130] >   },
	I1213 08:47:58.226872   47783 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 08:47:58.226884   47783 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226898   47783 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226903   47783 command_runner.go:130] >   "runtimeHandlers": [
	I1213 08:47:58.226906   47783 command_runner.go:130] >     {
	I1213 08:47:58.226910   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226915   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226921   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226925   47783 command_runner.go:130] >       }
	I1213 08:47:58.226928   47783 command_runner.go:130] >     },
	I1213 08:47:58.226934   47783 command_runner.go:130] >     {
	I1213 08:47:58.226938   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226946   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226958   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226962   47783 command_runner.go:130] >       },
	I1213 08:47:58.226965   47783 command_runner.go:130] >       "name": "runc"
	I1213 08:47:58.226968   47783 command_runner.go:130] >     }
	I1213 08:47:58.226971   47783 command_runner.go:130] >   ],
	I1213 08:47:58.226976   47783 command_runner.go:130] >   "status": {
	I1213 08:47:58.226984   47783 command_runner.go:130] >     "conditions": [
	I1213 08:47:58.226989   47783 command_runner.go:130] >       {
	I1213 08:47:58.226993   47783 command_runner.go:130] >         "message": "",
	I1213 08:47:58.226997   47783 command_runner.go:130] >         "reason": "",
	I1213 08:47:58.227001   47783 command_runner.go:130] >         "status": true,
	I1213 08:47:58.227009   47783 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 08:47:58.227015   47783 command_runner.go:130] >       },
	I1213 08:47:58.227019   47783 command_runner.go:130] >       {
	I1213 08:47:58.227033   47783 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 08:47:58.227038   47783 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 08:47:58.227046   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227054   47783 command_runner.go:130] >         "type": "NetworkReady"
	I1213 08:47:58.227057   47783 command_runner.go:130] >       },
	I1213 08:47:58.227060   47783 command_runner.go:130] >       {
	I1213 08:47:58.227083   47783 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 08:47:58.227094   47783 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 08:47:58.227100   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227106   47783 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 08:47:58.227111   47783 command_runner.go:130] >       }
	I1213 08:47:58.227115   47783 command_runner.go:130] >     ]
	I1213 08:47:58.227118   47783 command_runner.go:130] >   }
	I1213 08:47:58.227121   47783 command_runner.go:130] > }
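Note the NetworkReady=false condition in the dump above: it is expected at this stage, since the kindnet CNI has not been applied yet. To pull just the readiness conditions out of the same output (jq assumed available):

    sudo crictl info | jq -r '.status.conditions[] | "\(.type)=\(.status) \(.reason)"'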
	I1213 08:47:58.229345   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:58.229369   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:58.229387   47783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:47:58.229409   47783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:47:58.229527   47783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
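The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new (2237 bytes, per the scp line below); as a sanity check it can be fed to kubeadm in dry-run mode, a sketch using the binary path from this run:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run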
	
	I1213 08:47:58.229596   47783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:47:58.237061   47783 command_runner.go:130] > kubeadm
	I1213 08:47:58.237081   47783 command_runner.go:130] > kubectl
	I1213 08:47:58.237086   47783 command_runner.go:130] > kubelet
	I1213 08:47:58.237099   47783 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:47:58.237151   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:47:58.244326   47783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:47:58.256951   47783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:47:58.269808   47783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:47:58.282145   47783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:47:58.286872   47783 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 08:47:58.287376   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:58.410199   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
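After the daemon-reload and start above, a quick liveness probe for the unit is:

    systemctl is-active kubelet && journalctl -u kubelet --no-pager -n 20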
	I1213 08:47:59.022103   47783 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:47:59.022125   47783 certs.go:195] generating shared ca certs ...
	I1213 08:47:59.022141   47783 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.022352   47783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:47:59.022424   47783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:47:59.022444   47783 certs.go:257] generating profile certs ...
	I1213 08:47:59.022584   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:47:59.022699   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:47:59.022768   47783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:47:59.022808   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:47:59.022855   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:47:59.022876   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:47:59.022904   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:47:59.022937   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:47:59.022973   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:47:59.022995   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:47:59.023008   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:47:59.023095   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:47:59.023154   47783 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:47:59.023166   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:47:59.023224   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:47:59.023288   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:47:59.023328   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:47:59.023408   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:59.023471   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.023492   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.023541   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.024142   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:47:59.045491   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:47:59.066181   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:47:59.087256   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:47:59.105383   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:47:59.122457   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:47:59.141188   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:47:59.160057   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:47:59.177518   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:47:59.194757   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:47:59.211990   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:47:59.231728   47783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:47:59.244528   47783 ssh_runner.go:195] Run: openssl version
	I1213 08:47:59.250389   47783 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:47:59.250777   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.258690   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:47:59.266115   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269715   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269750   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269798   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.310445   47783 command_runner.go:130] > 51391683
	I1213 08:47:59.310954   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:47:59.318044   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.325154   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:47:59.332532   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336318   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336361   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336416   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.376950   47783 command_runner.go:130] > 3ec20f2e
	I1213 08:47:59.377430   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:47:59.384916   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.392420   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:47:59.399763   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403540   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403584   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403630   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.443918   47783 command_runner.go:130] > b5213941
	I1213 08:47:59.444419   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
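Editor's note: the openssl/ln sequence above (hash 51391683 for 4120.pem, 3ec20f2e for 41202.pem, b5213941 for minikubeCA.pem) is the same trick `c_rehash` performs: OpenSSL looks CA certificates up by a subject-hash filename of the form <hash>.0 under /etc/ssl/certs, so each PEM is hashed and symlinked under that name. Condensed sketch using the paths from this run:

    # sketch: install a CA cert where OpenSSL expects to find it
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"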
	I1213 08:47:59.451702   47783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455380   47783 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455462   47783 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:47:59.455488   47783 command_runner.go:130] > Device: 259,1	Inode: 1311318     Links: 1
	I1213 08:47:59.455502   47783 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:59.455526   47783 command_runner.go:130] > Access: 2025-12-13 08:43:51.909308195 +0000
	I1213 08:47:59.455533   47783 command_runner.go:130] > Modify: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455538   47783 command_runner.go:130] > Change: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455544   47783 command_runner.go:130] >  Birth: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455631   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:47:59.496226   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.496712   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:47:59.538384   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.538813   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:47:59.584114   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.584598   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:47:59.624635   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.625106   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:47:59.665474   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.665947   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:47:59.706066   47783 command_runner.go:130] > Certificate will not expire
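Editor's note: each `-checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl prints "Certificate will not expire" and exits 0 if so, and that exit status is what gates certificate regeneration. Sketch:

    # sketch: regenerate only if the cert expires within the next 24h
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver.crt expires within 24h; regeneration would happen here"
    fi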
	I1213 08:47:59.706546   47783 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:59.706648   47783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:47:59.706732   47783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:47:59.735062   47783 cri.go:89] found id: ""
	I1213 08:47:59.735134   47783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:47:59.742080   47783 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:47:59.742103   47783 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:47:59.742110   47783 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:47:59.743039   47783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:47:59.743056   47783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:47:59.743123   47783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:47:59.750746   47783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:47:59.751192   47783 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.751301   47783 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-074420" cluster setting kubeconfig missing "functional-074420" context setting]
	I1213 08:47:59.751688   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
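Editor's note: the "functional-074420" profile was missing from the kubeconfig, so minikube rewrites the cluster and context entries itself under a file lock. Done by hand, the repair would look roughly like this (sketch; server address and names taken from this run, CA path abbreviated):

    # sketch: repair kubeconfig entries for the functional-074420 profile
    kubectl config set-cluster functional-074420 \
      --server=https://192.168.49.2:8441 \
      --certificate-authority=$HOME/.minikube/ca.crt
    kubectl config set-context functional-074420 \
      --cluster=functional-074420 --user=functional-074420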
	I1213 08:47:59.752162   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.752336   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.752888   47783 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:47:59.752908   47783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:47:59.752914   47783 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:47:59.752919   47783 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:47:59.752923   47783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:47:59.753010   47783 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:47:59.753251   47783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:47:59.761240   47783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 08:47:59.761275   47783 kubeadm.go:602] duration metric: took 18.213538ms to restartPrimaryControlPlane
	I1213 08:47:59.761286   47783 kubeadm.go:403] duration metric: took 54.748002ms to StartCluster
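Editor's note: the restart-vs-reinit decision above hinges on a plain `diff -u` of the kubeadm config already on disk against the freshly generated one; exit status 0 means the files match and the existing control plane can be restarted in place. Sketch of the same check:

    # sketch: decide restart vs. re-init from diff's exit status
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "configs identical: restart existing control plane"
    else
        echo "configs differ: reconfiguration required"
    fi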
	I1213 08:47:59.761334   47783 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.761412   47783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.762024   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.762236   47783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:47:59.762588   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:59.762635   47783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:47:59.762697   47783 addons.go:70] Setting storage-provisioner=true in profile "functional-074420"
	I1213 08:47:59.762710   47783 addons.go:239] Setting addon storage-provisioner=true in "functional-074420"
	I1213 08:47:59.762736   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.762848   47783 addons.go:70] Setting default-storageclass=true in profile "functional-074420"
	I1213 08:47:59.762897   47783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074420"
	I1213 08:47:59.763226   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.763230   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.768637   47783 out.go:179] * Verifying Kubernetes components...
	I1213 08:47:59.771460   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:59.801964   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.802130   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.802416   47783 addons.go:239] Setting addon default-storageclass=true in "functional-074420"
	I1213 08:47:59.802452   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.802879   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.817615   47783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:47:59.820407   47783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:47:59.820438   47783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:47:59.820510   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.832904   47783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:47:59.832927   47783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:47:59.832987   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.858620   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:59.867019   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:48:00.019931   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:48:00.079586   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:00.079699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:00.772755   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772781   47783 node_ready.go:35] waiting up to 6m0s for node "functional-074420" to be "Ready" ...
	W1213 08:48:00.772842   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772962   47783 retry.go:31] will retry after 342.791424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:00.773112   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:00.773133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773144   47783 retry.go:31] will retry after 244.896783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
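Editor's note: every addon apply in this stretch fails the same way. `kubectl apply` validates manifests against the server's OpenAPI schema, so while the apiserver is still down even a well-formed YAML is rejected with "failed to download openapi ... connection refused". Rather than passing --validate=false, the retry helper keeps re-running the apply with a growing delay. A stripped-down equivalent of that loop (sketch):

    # sketch: retry kubectl apply with crude exponential backoff until the apiserver answers
    delay=0.25
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml; do
        sleep "$delay"
        delay=$(echo "$delay * 2" | bc)   # double the wait after each failure
    done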
	I1213 08:48:00.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:00.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.019052   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.079123   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.079165   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.079186   47783 retry.go:31] will retry after 233.412949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.116509   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.177616   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.181525   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.181562   47783 retry.go:31] will retry after 544.217788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.273820   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.273908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.274281   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.313528   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.373257   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.376997   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.377026   47783 retry.go:31] will retry after 483.901383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.726523   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.774029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.774123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.774536   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.788802   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.792516   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.792575   47783 retry.go:31] will retry after 627.991267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.861830   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.921846   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.925982   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.926017   47783 retry.go:31] will retry after 1.103907842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:02.420977   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:02.487960   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:02.491818   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.491849   47783 retry.go:31] will retry after 452.917795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.773507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:02.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
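Editor's note: in parallel with the addon retries, node_ready polls GET /api/v1/nodes/functional-074420 roughly every half-second for up to the 6m0s wait budget set earlier; while port 8441 is closed each request fails with connection refused and the warning above is logged. The same wait, expressed with stock kubectl (sketch):

    # sketch: wait for the node's Ready condition, as node_ready.go does in a loop
    kubectl wait --for=condition=Ready node/functional-074420 --timeout=6m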
	I1213 08:48:02.945881   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:03.009201   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.013021   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.013052   47783 retry.go:31] will retry after 1.276929732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.030115   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:03.100586   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.104547   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.104578   47783 retry.go:31] will retry after 1.048810244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.273922   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.274012   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.274318   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:03.773006   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.773078   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.773422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.153636   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:04.212539   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.212608   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.212632   47783 retry.go:31] will retry after 1.498415757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.273795   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.273919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.274275   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.290503   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:04.351966   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.352013   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.352031   47783 retry.go:31] will retry after 2.776026758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.773561   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.773631   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.773950   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:04.774040   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:05.273769   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.273843   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.274174   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.711960   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:05.773532   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.773904   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.778452   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:05.778491   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:05.778510   47783 retry.go:31] will retry after 3.257875901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:06.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:06.773209   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.773292   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:07.129286   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:07.188224   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:07.188280   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.188301   47783 retry.go:31] will retry after 1.575099921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.273578   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.273669   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.273988   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:07.274044   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:07.773778   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.773852   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.774188   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.273837   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.273926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.274179   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.763743   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:08.773132   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.773211   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.773479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.823924   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:08.827716   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:08.827745   47783 retry.go:31] will retry after 4.082199617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
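
The apply failure is not fatal: each addon manifest is retried on a growing, jittered delay (1.575s here, then 4.082s, 8.911s, and longer waits further down). A compact sketch of that retry-with-backoff pattern, with illustrative names and limits rather than minikube's actual retry API:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs apply until it succeeds or the attempt
    // budget is spent, roughly doubling the wait each time and adding
    // jitter, which is why the "will retry after" delays in the log
    // grow but are not exact powers of two.
    func retryWithBackoff(apply func() error, attempts int) error {
        delay := 1500 * time.Millisecond // illustrative starting point
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        n := 0
        _ = retryWithBackoff(func() error {
            n++
            if n < 3 { // simulate the apiserver refusing connections twice
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        }, 5)
    }
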
	I1213 08:48:09.037077   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:09.107584   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:09.107627   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.107646   47783 retry.go:31] will retry after 4.733469164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
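
The storageclass.yaml failure has the same root cause, and the error text explains the mechanism: kubectl validates manifests client-side against the cluster's OpenAPI schema, which it downloads from /openapi/v2 on the apiserver, so with nothing listening on 8441 the apply exits before a single object is submitted (--validate=false would skip the schema fetch, but the subsequent submission would fail the same way). The command minikube runs over SSH can be reproduced verbatim from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The exact command line ssh_runner executes in the log above.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            // While the apiserver is down this prints the same
            // "failed to download openapi ... connection refused" stderr.
            fmt.Printf("apply failed: %v\n%s", err, out)
        }
    }
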
	I1213 08:48:09.273965   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.274042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.274370   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:09.274422   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:09.773216   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.773289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.773111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.773192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.773561   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.272986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.273307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.773117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:11.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:12.273226   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.773101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.773360   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.910787   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:12.972202   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:12.972251   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:12.972270   47783 retry.go:31] will retry after 8.911795338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.273667   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.274062   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:13.773915   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.773987   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.774307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:13.774364   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:13.841699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:13.900246   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:13.900294   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.900313   47783 retry.go:31] will retry after 6.419298699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:14.273688   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.273763   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.274022   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:14.773814   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.773891   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.774197   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.273923   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.273993   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.274354   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.773092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.773436   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:16.273052   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.273127   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:16.273499   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:16.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.773488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.272982   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.273050   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.273294   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.773414   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:18.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.273210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.273554   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:18.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:18.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.273151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.273502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.773409   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.320652   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:20.382818   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:20.382863   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.382884   47783 retry.go:31] will retry after 5.774410243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.773290   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.773364   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.773699   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:20.773754   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:21.273419   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.273508   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.273838   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.773521   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.773588   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.773835   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.885194   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:21.947231   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:21.947284   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:21.947318   47783 retry.go:31] will retry after 10.220008645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:22.273767   47783 type.go:168] "Request Body" body=""
	I1213 08:48:22.273840   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:22.274159   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:22.773949   47783 type.go:168] "Request Body" body=""
	I1213 08:48:22.774022   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:22.774282   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:22.774333   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
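
Interleaved with the addon retries, node_ready polls the node object twice a second and only periodically surfaces a warning, waiting for the Ready condition to flip once the apiserver comes back. A stripped-down version of that wait loop; the real client authenticates with the client certificates from /var/lib/minikube/kubeconfig, which this sketch skips for brevity:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // Just the fields of a v1 Node needed to read its Ready condition.
    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        // Assumption: TLS verification is disabled only to keep the
        // sketch self-contained; minikube presents real client certs.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.49.2:8441/api/v1/nodes/functional-074420"
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("will retry:", err) // "connection refused" until the apiserver is up
                time.Sleep(500 * time.Millisecond)
                continue
            }
            var n node
            err = json.NewDecoder(resp.Body).Decode(&n)
            resp.Body.Close()
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == "Ready" && c.Status == "True" {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
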
	I1213 08:48:23.273005   47783 type.go:168] "Request Body" body=""
	I1213 08:48:23.273085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:23.273357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:23.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:48:23.773140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:23.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:24.273095   47783 type.go:168] "Request Body" body=""
	I1213 08:48:24.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:24.273534   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:24.773021   47783 type.go:168] "Request Body" body=""
	I1213 08:48:24.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:24.773394   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:25.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:48:25.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:25.273495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:25.273549   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:25.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:48:25.773494   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:25.773798   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:26.158458   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:26.211883   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:26.215285   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:26.215316   47783 retry.go:31] will retry after 15.443420543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:26.273497   47783 type.go:168] "Request Body" body=""
	I1213 08:48:26.273568   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:26.273871   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:26.773647   47783 type.go:168] "Request Body" body=""
	I1213 08:48:26.773724   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:26.774089   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:27.273764   47783 type.go:168] "Request Body" body=""
	I1213 08:48:27.273848   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:27.274199   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:27.274258   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:27.773967   47783 type.go:168] "Request Body" body=""
	I1213 08:48:27.774040   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:27.774313   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:28.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:48:28.273130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:28.273472   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:28.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:48:28.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:28.773511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:29.272971   47783 type.go:168] "Request Body" body=""
	I1213 08:48:29.273039   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:29.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:29.773229   47783 type.go:168] "Request Body" body=""
	I1213 08:48:29.773303   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:29.773634   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:29.773690   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:30.273344   47783 type.go:168] "Request Body" body=""
	I1213 08:48:30.273424   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:30.273761   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:30.773498   47783 type.go:168] "Request Body" body=""
	I1213 08:48:30.773573   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:30.773910   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:31.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:48:31.273780   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:31.274114   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:31.773932   47783 type.go:168] "Request Body" body=""
	I1213 08:48:31.774003   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:31.774336   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:31.774389   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:32.167590   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:32.226722   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:32.226762   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.226781   47783 retry.go:31] will retry after 8.254164246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.273897   47783 type.go:168] "Request Body" body=""
	I1213 08:48:32.273972   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:32.274230   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:32.772997   47783 type.go:168] "Request Body" body=""
	I1213 08:48:32.773100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:32.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:33.273106   47783 type.go:168] "Request Body" body=""
	I1213 08:48:33.273177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:33.273513   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:33.773160   47783 type.go:168] "Request Body" body=""
	I1213 08:48:33.773250   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:33.773621   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:34.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:48:34.273213   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:34.273537   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:34.273589   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:34.773220   47783 type.go:168] "Request Body" body=""
	I1213 08:48:34.773295   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:34.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:35.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:48:35.273116   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:35.273367   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:35.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:35.773150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:35.773476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:36.273986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:36.274079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:36.274424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:36.274479   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:36.773035   47783 type.go:168] "Request Body" body=""
	I1213 08:48:36.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:36.773358   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:37.273041   47783 type.go:168] "Request Body" body=""
	I1213 08:48:37.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:37.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:37.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:48:37.773130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:37.773446   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:38.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:48:38.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:38.273423   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:38.773129   47783 type.go:168] "Request Body" body=""
	I1213 08:48:38.773205   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:38.773535   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:38.773618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:39.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:48:39.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:39.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:39.773505   47783 type.go:168] "Request Body" body=""
	I1213 08:48:39.773593   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:39.773873   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:40.273652   47783 type.go:168] "Request Body" body=""
	I1213 08:48:40.273731   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:40.274095   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:40.481720   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:40.548346   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:40.548381   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.548399   47783 retry.go:31] will retry after 23.072803829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
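
Note that the storage-provisioner and storageclass retries interleave freely with the readiness poll, all under the same PID (47783): each applier runs in its own goroutine and backs off on its own clock, which is why their "will retry after" delays drift apart (23.073s here versus 14.236s for storageclass just below), and why the sequence grows roughly geometrically but not monotonically (8.254s follows 8.911s), the signature of randomized jitter on an exponential schedule. A minimal sketch of that fan-out, with illustrative names:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // applyWithRetry stands in for the per-manifest retry loop sketched
    // earlier; each manifest backs off independently.
    func applyWithRetry(manifest string) {
        fmt.Println("applying", manifest)
        time.Sleep(10 * time.Millisecond)
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        }
        var wg sync.WaitGroup
        for _, m := range manifests {
            wg.Add(1)
            go func(m string) {
                defer wg.Done()
                applyWithRetry(m)
            }(m)
        }
        wg.Wait()
    }
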
	I1213 08:48:40.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:48:40.773944   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:40.774217   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:40.774266   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:41.273996   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.274066   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.274319   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:41.658979   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:41.720805   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:41.720849   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.720869   47783 retry.go:31] will retry after 14.236359641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.774005   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.774085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.774430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.273146   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.273232   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:43.273054   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.273159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.273484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:43.273541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:43.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.773275   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.773578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.773108   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.773180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:45.273251   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.273329   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:45.273709   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:45.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.773758   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.774018   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.273828   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.273897   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.274229   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.772972   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.773043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.773362   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.273126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.773139   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.773579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:47.773642   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:48.273287   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.273371   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.273677   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:48.773341   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.773416   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.773680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.273506   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.773247   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.773323   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.773653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:49.773705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:50.273353   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.273427   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.273741   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:50.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.773764   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.774083   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.273888   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.273963   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.773911   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.774215   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:51.774264   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:52.273985   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.274074   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:52.773081   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.773168   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.273059   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.273129   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.273441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.773125   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.773200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:54.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:54.273482   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:54.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.773104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.773358   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.273041   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.273121   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.773058   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.773419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.957869   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:56.020865   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:56.020923   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.020946   47783 retry.go:31] will retry after 43.666748427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
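
[Editor's note] retry.go:31 above schedules a randomized delay ("will retry after 43.666748427s") before rerunning the failed kubectl apply. A hedged sketch of that retry shape in Go; this is a generic helper with made-up names, not minikube's actual retry package.

    package main

    import (
    	"errors"
    	"log"
    	"math/rand"
    	"time"
    )

    // retryWithJitter reruns op until it succeeds or attempts run out,
    // sleeping a random delay up to maxDelay between tries, as in the
    // "will retry after 43.666748427s" line above.
    func retryWithJitter(op func() error, attempts int, maxDelay time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		d := time.Duration(rand.Int63n(int64(maxDelay)))
    		log.Printf("will retry after %s: %v", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	fail := func() error { return errors.New("connection refused") }
    	_ = retryWithJitter(fail, 3, 45*time.Second)
    }

The jitter spreads repeated applies out in time, which is why the storageclass and storage-provisioner retries in this log land at irregular intervals rather than on a fixed schedule.
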
	I1213 08:48:56.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:56.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:56.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:57.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.273273   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.273598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:57.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.773100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.773380   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.273180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.773215   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.773607   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:58.773665   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:59.273927   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.273999   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.274264   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:59.773208   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.773283   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.773635   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.273263   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.273369   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.273817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.773908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.774222   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:00.774277   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:01.274021   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.274100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.274424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:01.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.773375   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.272965   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.273041   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.273365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.773094   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.773195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:03.273064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:03.273512   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:03.622173   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:03.678608   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:03.682133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.682162   47783 retry.go:31] will retry after 22.66884586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
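
[Editor's note] Each apply above is executed on the node as a single shell command (ssh_runner.go:195), and the failure happens in kubectl's client-side validation: it must download /openapi/v2 from the apiserver, which is refusing connections, so the apply cannot succeed until the apiserver is back regardless of the --validate=false hint in the error text. A minimal local stand-in for that command runner using os/exec; the shell string is copied from the log, the helper name runShell is hypothetical.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runShell mirrors what the ssh runner does on the node: run one shell
    // command and capture combined stdout/stderr for the retry log.
    func runShell(script string) (string, error) {
    	out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	script := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force " +
    		"-f /etc/kubernetes/addons/storage-provisioner.yaml"
    	out, err := runShell(script)
    	// With the apiserver down this prints the same openapi
    	// "connection refused" stderr seen above, and err is a
    	// non-nil *exec.ExitError (exit status 1).
    	fmt.Println(out, err)
    }
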
	I1213 08:49:03.773432   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.773502   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.273439   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.273517   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.273868   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.773210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:05.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.273221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.273546   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:05.273657   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:05.773252   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.773325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.773682   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:06.273974   47783 type.go:168] "Request Body" body=""
	I1213 08:49:06.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:06.274354   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:06.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:06.773140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:06.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:07.273190   47783 type.go:168] "Request Body" body=""
	I1213 08:49:07.273272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:07.273599   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:07.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:49:07.773084   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:07.773327   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:07.773371   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:08.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:49:08.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:08.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:08.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:49:08.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:08.773582   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:49:09.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:09.273483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:09.773186   47783 type.go:168] "Request Body" body=""
	I1213 08:49:09.773263   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:09.773596   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:09.773650   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:10.273086   47783 type.go:168] "Request Body" body=""
	I1213 08:49:10.273160   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:10.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:10.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:49:10.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:10.773368   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:11.273030   47783 type.go:168] "Request Body" body=""
	I1213 08:49:11.273123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:11.273462   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:11.773903   47783 type.go:168] "Request Body" body=""
	I1213 08:49:11.773985   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:11.774302   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:11.774358   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:12.273000   47783 type.go:168] "Request Body" body=""
	I1213 08:49:12.273072   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:12.273375   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:12.773077   47783 type.go:168] "Request Body" body=""
	I1213 08:49:12.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:12.773483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:13.273179   47783 type.go:168] "Request Body" body=""
	I1213 08:49:13.273252   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:13.273585   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:13.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:49:13.773337   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:13.773602   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:14.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:49:14.273179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:14.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:14.273573   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:14.773254   47783 type.go:168] "Request Body" body=""
	I1213 08:49:14.773326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:14.773613   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:15.272991   47783 type.go:168] "Request Body" body=""
	I1213 08:49:15.273071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:15.273379   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:15.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:49:15.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:15.773537   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:16.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:49:16.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:16.273493   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:16.773039   47783 type.go:168] "Request Body" body=""
	I1213 08:49:16.773105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:16.773399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:16.773452   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:17.272977   47783 type.go:168] "Request Body" body=""
	I1213 08:49:17.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:17.273386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:17.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:49:17.773156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:17.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:18.273029   47783 type.go:168] "Request Body" body=""
	I1213 08:49:18.273098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:18.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:18.773104   47783 type.go:168] "Request Body" body=""
	I1213 08:49:18.773178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:18.773501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:18.773552   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:19.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:49:19.273154   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:19.273462   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:19.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:49:19.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:19.773602   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:20.273287   47783 type.go:168] "Request Body" body=""
	I1213 08:49:20.273359   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:20.273692   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:20.773504   47783 type.go:168] "Request Body" body=""
	I1213 08:49:20.773579   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:20.773879   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:20.773925   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:21.273640   47783 type.go:168] "Request Body" body=""
	I1213 08:49:21.273706   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:21.273955   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:21.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:49:21.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:21.774073   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:22.273883   47783 type.go:168] "Request Body" body=""
	I1213 08:49:22.273954   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:22.274296   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:22.772994   47783 type.go:168] "Request Body" body=""
	I1213 08:49:22.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:22.773314   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:23.272983   47783 type.go:168] "Request Body" body=""
	I1213 08:49:23.273054   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:23.273387   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:23.273444   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:23.773118   47783 type.go:168] "Request Body" body=""
	I1213 08:49:23.773191   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:23.773501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:24.273058   47783 type.go:168] "Request Body" body=""
	I1213 08:49:24.273141   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:24.273459   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:24.773066   47783 type.go:168] "Request Body" body=""
	I1213 08:49:24.773148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:24.773494   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:25.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:49:25.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:25.273476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:25.273529   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:25.773465   47783 type.go:168] "Request Body" body=""
	I1213 08:49:25.773532   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:25.773794   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:26.273591   47783 type.go:168] "Request Body" body=""
	I1213 08:49:26.273658   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:26.273978   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:26.351410   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:26.407043   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410457   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410551   47783 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
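
[Editor's note] At this point the storage-provisioner addon is given up on. Note that both failure modes in the log are ECONNREFUSED, the readiness poll against 192.168.49.2:8441 and kubectl's openapi fetch against localhost:8441, which indicates nothing is listening on the apiserver port at all rather than a slow or unhealthy apiserver. A quick TCP-level check that distinguishes "refused" from "timeout", as a diagnostic sketch:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// "connection refused" returns almost immediately; a firewalled or
    	// hung endpoint would instead run into the 2s timeout.
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
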
	I1213 08:49:26.773965   47783 type.go:168] "Request Body" body=""
	I1213 08:49:26.774065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:26.774374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:27.272953   47783 type.go:168] "Request Body" body=""
	I1213 08:49:27.273025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:27.273280   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:27.774019   47783 type.go:168] "Request Body" body=""
	I1213 08:49:27.774142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:27.774488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:27.774540   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:28.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:49:28.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:28.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:28.773078   47783 type.go:168] "Request Body" body=""
	I1213 08:49:28.773153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:28.773437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:29.273108   47783 type.go:168] "Request Body" body=""
	I1213 08:49:29.273179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:29.273507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:29.773273   47783 type.go:168] "Request Body" body=""
	I1213 08:49:29.773361   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:29.773684   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:30.273106   47783 type.go:168] "Request Body" body=""
	I1213 08:49:30.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:30.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:30.273487   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:30.773081   47783 type.go:168] "Request Body" body=""
	I1213 08:49:30.773180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:30.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:31.273215   47783 type.go:168] "Request Body" body=""
	I1213 08:49:31.273285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:31.273560   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:31.773891   47783 type.go:168] "Request Body" body=""
	I1213 08:49:31.773957   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:31.774211   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:32.274000   47783 type.go:168] "Request Body" body=""
	I1213 08:49:32.274087   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:32.274384   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:32.274426   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:32.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:49:32.773174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:32.773489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:33.273961   47783 type.go:168] "Request Body" body=""
	I1213 08:49:33.274039   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:33.274308   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:33.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:49:33.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:33.773455   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:34.273150   47783 type.go:168] "Request Body" body=""
	I1213 08:49:34.273226   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:34.273548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:34.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:49:34.773097   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:34.773360   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:34.773410   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:35.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:49:35.273147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:35.273467   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeated every ~500ms through 08:49:39, each with an empty response; node_ready.go:55 logged the same "connection refused" (will retry) warning at 08:49:36.773678 and 08:49:39.273532 ...]
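What node_ready.go is doing here is a fixed-interval poll of the node object, checking its "Ready" condition and retrying on any transport error. A minimal client-go sketch of that loop (node name and kubeconfig path are taken from the log; the structure is an illustrative assumption, not minikube's actual code):

// nodeready.go: poll a node every 500ms until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-074420", metav1.GetOptions{})
		if err != nil {
			// Matches the log: warn and retry on "connection refused".
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The 500ms cadence matches the timestamps above; the loop only succeeds once the apiserver is reachable and reports the Ready condition as True, which never happens in this run.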
	I1213 08:49:39.687941   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:49:39.742570   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746037   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746134   47783 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:49:39.749234   47783 out.go:179] * Enabled addons: 
	I1213 08:49:39.751225   47783 addons.go:530] duration metric: took 1m39.988589749s for enable addons: enabled=[]
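The addon step shells out to the bundled kubectl and, per the "apply failed, will retry" line above, is wrapped in a retry. A rough sketch of that pattern (the command layout is copied from the log; the attempt count and backoff are illustrative assumptions, not minikube's actual policy):

// applyaddon.go: apply an addon manifest via the bundled kubectl, retrying on failure.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	for attempt := 1; attempt <= 5; attempt++ {
		// sudo treats KUBECONFIG=... as an environment assignment, as in the log line.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("addon applied")
			return
		}
		fmt.Printf("apply failed (attempt %d), will retry: %v\n%s", attempt, err, out)
		time.Sleep(2 * time.Second)
	}
}

In this run the retries are pointless: kubectl's validation needs the apiserver's OpenAPI endpoint, and with nothing listening on port 8441 every attempt fails the same way, so the addon list ends up empty ("enabled=[]").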
	I1213 08:49:39.773343   47783 type.go:168] "Request Body" body=""
	I1213 08:49:39.773416   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:39.773726   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET poll repeated every ~500ms from 08:49:40 through 08:50:35, every response empty; node_ready.go:55 re-logged the "connection refused" (will retry) warning roughly every two seconds, the last at 08:50:33.773693 ...]
	I1213 08:50:35.773467   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.773539   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:35.773847   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:36.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.273501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:36.773198   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.773272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.773631   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.273322   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.273649   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.773139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:38.273158   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.273235   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.273563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:38.273617   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:38.773952   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.774033   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.774277   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.272962   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.273042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.273376   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.773154   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.773228   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.273301   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.273580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.773572   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.773643   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.773972   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:40.774022   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:41.273745   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.273822   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.274145   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:41.773824   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.774153   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.273992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.274071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.274419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.773457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:43.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:43.273477   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:43.773102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.773521   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.273220   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.273315   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.273660   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.773097   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.773359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:45.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.273133   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:45.273530   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:45.773107   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.773544   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.273227   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.273297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.273559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:47.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.273574   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:47.273628   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:47.773031   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.273067   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.773210   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.272997   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.273065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.273322   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.773187   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.773256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.773595   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:49.773649   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:50.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.273391   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.273716   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:50.773448   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.773521   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.773785   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.773185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.773266   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.773606   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:52.273034   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.273399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:52.273451   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:52.773078   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.773490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.273950   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.274337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.772992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.773065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.773378   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.272967   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.273044   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.273396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.773129   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.773203   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.773528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:54.773588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:55.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:55.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.273075   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.772978   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.773045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.773290   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:57.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:57.273428   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:57.773002   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.273022   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.273391   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.773196   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:59.273246   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.273324   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.273653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:59.273705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:59.773429   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.773497   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.773750   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.273609   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.774124   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:01.273906   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.273981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.274248   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:01.274298   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:01.772982   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.773396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.273510   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.773051   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.773122   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.773441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.773197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:03.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:04.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:04.773150   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.773225   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.773542   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.273267   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.273342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.273694   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.773534   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.773600   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.773861   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:05.773901   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:06.273627   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.273698   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.273995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:06.773786   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.773858   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.774165   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.273877   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.273959   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.274221   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.774080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.774408   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:07.774461   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.273511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:08.773065   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.273148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.773600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:10.273256   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.273325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:10.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:10.773469   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.773540   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.773888   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.273777   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.274082   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.773860   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.773926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:12.273909   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.273991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.274305   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:12.274363   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:12.773010   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.773086   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.774032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.774103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.774412   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.773021   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.773090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.773394   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:14.773446   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:15.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:15.773082   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.273248   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.273492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.773512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:16.773564   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:17.273224   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.273307   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:17.773254   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.773319   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.273315   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.273399   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.273714   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.773100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.773556   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:18.773613   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:19.273057   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.273134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.273422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:19.773326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.773408   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.773817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.273652   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.273726   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.274023   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.773981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.774242   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:20.774291   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:21.272987   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.273071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.273418   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:21.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.773224   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.273385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:23.273121   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.273201   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.273516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:23.273571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:23.773217   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.773286   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.273100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.773255   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.773333   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.773667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:25.273326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.273641   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:25.273680   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:25.773681   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.773757   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.774066   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.273874   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.273947   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.274273   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.772979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.773047   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.773307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.273109   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.273206   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.273597   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.773321   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.773394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.773719   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:27.773778   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:28.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.273092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:28.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.773172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.273187   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.273261   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.273583   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.773457   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.773543   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.773812   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:29.773863   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:30.273575   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.273663   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.273957   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:30.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.773991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.774329   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.273977   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.274058   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.773024   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:32.273147   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.273218   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:32.273546   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:32.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.273171   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.773083   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.273124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.273384   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.773086   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.773216   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.773545   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:34.773602   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:35.273264   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.273339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.273673   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:35.773675   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.773742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.273812   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.273886   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.274208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.773984   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.774056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.774351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:36.774398   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:37.273032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.273364   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:37.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.773146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.773499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.273147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.273480   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.773150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:39.273066   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.273476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:39.273529   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:39.773199   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.773278   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.773580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.273030   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.273392   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.773204   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.773647   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:41.273380   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.273453   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.273801   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:41.273854   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:41.773592   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.773662   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.773929   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.273710   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.273789   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.274146   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.773753   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.774140   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:43.273914   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.273992   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.274262   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:43.274314   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:43.774012   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.774089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.774435   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.773145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.773485   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.273088   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.273517   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:45.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:46.273137   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.273200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.273447   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:46.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.773198   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.273291   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.273588   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.773395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:48.273117   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:48.273569   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:48.773253   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.773339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.773688   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.273385   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.273462   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.273726   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.773610   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.773679   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:50.273790   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.273866   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.274187   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:50.274243   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:50.773061   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.773136   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.773466   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.273471   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.773060   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.273335   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.773020   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:52.773493   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:53.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.273277   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.273626   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:53.773306   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.773373   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.273587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.773074   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:54.773556   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:55.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.273270   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.273520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:55.773075   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.773149   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.773705   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.273383   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.772963   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.773031   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.773288   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:57.272979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:57.273481   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:57.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.773193   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.773526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.273108   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.773188   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.773503   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:59.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.273220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:59.273585   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:59.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.273530   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.273627   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.274021   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.773522   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.273071   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.773057   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:01.773527   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:02.273229   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.273310   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.273637   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:02.773212   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.773280   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.273115   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.273565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.773352   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.773695   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:03.773772   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:04.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.273090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.273426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:04.773087   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.273162   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.273242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.273562   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.773515   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.773585   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.773843   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:05.773884   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:06.273680   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.273755   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.274063   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:06.773854   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.773929   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.774259   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.274012   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.274080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.274341   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.773425   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:08.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.273531   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:08.273588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:08.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.773151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.273541   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.773271   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.773343   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.773683   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:10.273235   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.273308   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.273623   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:10.273686   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:10.773580   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.773661   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.773990   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.273663   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.274065   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.773833   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:12.273942   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.274016   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.274348   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:12.274403   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:12.774024   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.774098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.774431   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.273324   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.273404   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.273738   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.773491   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.773809   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:14.773870   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:15.273604   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.273678   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.273970   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:15.773929   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.774005   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.273966   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.274328   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.773036   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.773432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:17.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:17.273639   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:17.772950   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.773025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.773278   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.274027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.274101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.773356   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:19.773818   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:20.273485   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.273567   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.273890   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:20.773865   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.773932   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.774231   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.273999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.274069   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.274395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.772998   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.773076   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.773386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:22.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.273101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.273413   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:22.273462   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical request/response cycles condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 (same Accept and User-Agent headers) repeats every ~500 ms from 08:52:22.773 through 08:53:19.273, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logging the same "will retry" warning roughly every 2 s ...]
	I1213 08:53:19.773060   47783 type.go:168] "Request Body" body=""
	I1213 08:53:19.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:19.773489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:20.273071   47783 type.go:168] "Request Body" body=""
	I1213 08:53:20.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:20.273506   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:20.773068   47783 type.go:168] "Request Body" body=""
	I1213 08:53:20.773137   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:20.773437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:21.273102   47783 type.go:168] "Request Body" body=""
	I1213 08:53:21.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:21.273479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:21.273526   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:21.773202   47783 type.go:168] "Request Body" body=""
	I1213 08:53:21.773283   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:21.773621   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:22.273008   47783 type.go:168] "Request Body" body=""
	I1213 08:53:22.273079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:22.273390   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:22.773066   47783 type.go:168] "Request Body" body=""
	I1213 08:53:22.773136   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:22.773465   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:23.273163   47783 type.go:168] "Request Body" body=""
	I1213 08:53:23.273237   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:23.273573   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:23.273630   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:23.772972   47783 type.go:168] "Request Body" body=""
	I1213 08:53:23.773041   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:23.773344   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:24.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:53:24.273160   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:24.273485   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:24.773194   47783 type.go:168] "Request Body" body=""
	I1213 08:53:24.773266   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:24.773573   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:25.273008   47783 type.go:168] "Request Body" body=""
	I1213 08:53:25.273080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:25.273326   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:25.773135   47783 type.go:168] "Request Body" body=""
	I1213 08:53:25.773214   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:25.773556   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:25.773610   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:26.273236   47783 type.go:168] "Request Body" body=""
	I1213 08:53:26.273334   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:26.273645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:26.773015   47783 type.go:168] "Request Body" body=""
	I1213 08:53:26.773082   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:26.773414   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:27.273106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:27.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:27.273504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:27.773212   47783 type.go:168] "Request Body" body=""
	I1213 08:53:27.773288   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:27.773611   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:27.773662   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:28.273082   47783 type.go:168] "Request Body" body=""
	I1213 08:53:28.273158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:28.273448   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:28.773085   47783 type.go:168] "Request Body" body=""
	I1213 08:53:28.773154   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:28.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:29.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:53:29.273196   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:29.273483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:29.773239   47783 type.go:168] "Request Body" body=""
	I1213 08:53:29.773318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:29.773573   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:30.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:53:30.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:30.273542   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:30.273593   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:30.773082   47783 type.go:168] "Request Body" body=""
	I1213 08:53:30.773174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:30.773474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:31.273124   47783 type.go:168] "Request Body" body=""
	I1213 08:53:31.273195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:31.273460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:31.773066   47783 type.go:168] "Request Body" body=""
	I1213 08:53:31.773163   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:31.773477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:32.273159   47783 type.go:168] "Request Body" body=""
	I1213 08:53:32.273240   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:32.273577   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:32.273635   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:32.772992   47783 type.go:168] "Request Body" body=""
	I1213 08:53:32.773056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:32.773301   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:33.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:53:33.273144   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:33.273448   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:33.773096   47783 type.go:168] "Request Body" body=""
	I1213 08:53:33.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:33.773514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:34.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:53:34.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:34.273432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:34.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:53:34.773216   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:34.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:34.773545   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:35.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:53:35.273162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:35.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:35.773024   47783 type.go:168] "Request Body" body=""
	I1213 08:53:35.773105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:35.773404   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:36.273023   47783 type.go:168] "Request Body" body=""
	I1213 08:53:36.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:36.273475   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:36.773061   47783 type.go:168] "Request Body" body=""
	I1213 08:53:36.773145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:36.773477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:37.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:37.273135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:37.273382   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:37.273421   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:37.773097   47783 type.go:168] "Request Body" body=""
	I1213 08:53:37.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:37.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:38.273189   47783 type.go:168] "Request Body" body=""
	I1213 08:53:38.273267   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:38.273579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:38.773028   47783 type.go:168] "Request Body" body=""
	I1213 08:53:38.773102   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:38.773347   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:39.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:53:39.273143   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:39.273470   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:39.273519   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:39.773215   47783 type.go:168] "Request Body" body=""
	I1213 08:53:39.773288   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:39.773599   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:40.273048   47783 type.go:168] "Request Body" body=""
	I1213 08:53:40.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:40.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:40.773077   47783 type.go:168] "Request Body" body=""
	I1213 08:53:40.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:40.773508   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:41.273216   47783 type.go:168] "Request Body" body=""
	I1213 08:53:41.273298   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:41.273616   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:41.273672   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:41.772986   47783 type.go:168] "Request Body" body=""
	I1213 08:53:41.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:41.773356   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:42.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.273143   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:42.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.773518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.273132   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:43.773551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:44.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.273652   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:44.773050   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.273208   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.273289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.273720   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.773581   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.773651   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.773963   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:45.774017   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:46.275629   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.275703   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.275961   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:46.773754   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.774161   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.273968   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.274351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.773096   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.773357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:48.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.273530   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:48.273587   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:48.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.773183   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.773523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.273149   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.773819   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:50.273603   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.273680   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.273953   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:50.273999   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:50.773802   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.773880   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.774158   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.273956   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.274317   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.772988   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.773063   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.773397   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.773515   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:52.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:53.273241   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.273661   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:53.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.773123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.273507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.773088   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:55.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.273215   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.273569   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:55.273618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:55.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.773491   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.773063   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.773385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.773297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.773605   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:57.773654   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:58.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.273402   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.273675   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:58.773360   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.773734   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.273441   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.273519   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.273831   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.773701   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:59.773764   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:54:00.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.273367   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:54:00.273821   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:54:00.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.773932   47783 node_ready.go:38] duration metric: took 6m0.00107019s for node "functional-074420" to be "Ready" ...
	I1213 08:54:00.777004   47783 out.go:203] 
	W1213 08:54:00.779921   47783 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 08:54:00.779957   47783 out.go:285] * 
	W1213 08:54:00.782360   47783 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:54:00.785205   47783 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978483314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978503105Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978537772Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978553321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978574055Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978590318Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978599738Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978615877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978633297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978662992Z" level=info msg="Connect containerd service"
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.978978999Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:47:57 functional-074420 containerd[5215]: time="2025-12-13T08:47:57.979550081Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000071290Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000154589Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000504016Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.000574737Z" level=info msg="Start recovering state"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043623992Z" level=info msg="Start event monitor"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043840191Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043918469Z" level=info msg="Start streaming server"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.043996361Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044117560Z" level=info msg="runtime interface starting up..."
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044176883Z" level=info msg="starting plugins..."
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.044246234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:47:58 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:47:58 functional-074420 containerd[5215]: time="2025-12-13T08:47:58.046961351Z" level=info msg="containerd successfully booted in 0.089510s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:54:05.316298    8582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:05.317087    8582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:05.318718    8582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:05.319361    8582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:05.321153    8582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 08:54:05 up 36 min,  0 user,  load average: 0.27, 0.30, 0.49
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 13 08:54:02 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:02 functional-074420 kubelet[8424]: E1213 08:54:02.830782    8424 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:02 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:03 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 13 08:54:03 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:03 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:03 functional-074420 kubelet[8457]: E1213 08:54:03.610501    8457 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:03 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:03 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:04 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 13 08:54:04 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:04 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:04 functional-074420 kubelet[8478]: E1213 08:54:04.328554    8478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:04 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:04 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:05 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 13 08:54:05 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:05 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:05 functional-074420 kubelet[8515]: E1213 08:54:05.086429    8515 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:05 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:05 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
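The six minutes of output above are a single retry loop: minikube's node readiness check re-issues the same GET every ~500ms until the node reports Ready or the 6m0s wait expires with "context deadline exceeded". Below is a minimal Go sketch of that polling pattern. It is not minikube's actual node_ready.go: the URL and profile name are copied from the log, but the plain net/http client (no TLS client certificates) stands in for the generated Kubernetes client, so treat it as an illustration of the pattern only.

    // poll_node_ready.go — minimal sketch of the poll-until-Ready loop
    // seen in the log above (500ms interval, 6-minute deadline).
    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// 6m0s matches "wait 6m0s for node" in the failure message above.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	url := "https://192.168.49.2:8441/api/v1/nodes/functional-074420"
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			// The failure mode above: context deadline exceeded.
    			fmt.Println("timed out waiting for node to be Ready:", ctx.Err())
    			return
    		case <-tick.C:
    			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    			if err != nil {
    				continue
    			}
    			resp, err := http.DefaultClient.Do(req)
    			if err != nil {
    				// "connect: connection refused" lands here; log and retry,
    				// just like the node_ready.go:55 warnings above.
    				fmt.Println("will retry:", err)
    				continue
    			}
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("node object retrieved; inspect the Ready condition here")
    				return
    			}
    		}
    	}
    }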
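The kubelet section of the log shows the actual root cause: systemd restarts kubelet 814 times because the v1.35.0-beta.0 kubelet validates the host's cgroup mode on startup and exits when it finds cgroup v1. The sketch below shows one common way to make the same v1-vs-v2 distinction on Linux, assuming the golang.org/x/sys/unix package is available; it mirrors the idea behind the check, not kubelet's implementation.

    // cgroup_check.go — detect cgroup v1 vs v2 by the filesystem magic
    // of /sys/fs/cgroup (cgroup2fs on a unified-hierarchy host).
    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		fmt.Println("statfs failed:", err)
    		return
    	}
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		// A non-cgroup2 magic here means the legacy v1 hierarchy is
    		// mounted — the condition that makes this kubelet exit with
    		// the validation error in the log above.
    		fmt.Println("cgroup v1 detected: this kubelet is configured to refuse to run")
    	}
    }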
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (354.97896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.29s)
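
The `--format={{.APIServer}}` argument that helpers_test.go passes to `minikube status` above is a Go text/template evaluated against the profile's status. A minimal sketch follows, with a hypothetical Status struct standing in for minikube's real status type; the field name APIServer and the "Stopped" value are taken from the post-mortem output above.

    // status_format.go — rendering a {{.APIServer}}-style format string.
    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a hypothetical stand-in for minikube's status struct.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	// Prints "Stopped", matching the output for functional-074420 above.
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    }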

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 kubectl -- --context functional-074420 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 kubectl -- --context functional-074420 get pods: exit status 1 (112.037389ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-074420 kubectl -- --context functional-074420 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
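The inspect output shows the container itself is healthy ("Status": "running") and that the apiserver port 8441/tcp is published to 127.0.0.1:32791; only the control plane inside it is down. The same Go-template form the harness uses for port 22/tcp later in these logs can read that mapping directly — a sketch reusing the profile name from this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-074420
	# expected output on this host: 32791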
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (324.070125ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-049633 ssh findmnt -T /mount3                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount   │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image   │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete  │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start   │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start   │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:latest                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add minikube-local-cache-test:functional-074420                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache delete minikube-local-cache-test:functional-074420                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl images                                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ cache   │ functional-074420 cache reload                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ kubectl │ functional-074420 kubectl -- --context functional-074420 get pods                                                                                       │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:47:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:47:55.372522   47783 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:47:55.372733   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.372759   47783 out.go:374] Setting ErrFile to fd 2...
	I1213 08:47:55.372779   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.373071   47783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:47:55.373500   47783 out.go:368] Setting JSON to false
	I1213 08:47:55.374339   47783 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1828,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:47:55.374435   47783 start.go:143] virtualization:  
	I1213 08:47:55.378014   47783 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:47:55.381059   47783 notify.go:221] Checking for updates...
	I1213 08:47:55.381456   47783 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:47:55.384645   47783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:47:55.387475   47783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:55.390285   47783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:47:55.393179   47783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:47:55.396170   47783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:47:55.399625   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:55.399723   47783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:47:55.421152   47783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:47:55.421278   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.479286   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.469949512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.479410   47783 docker.go:319] overlay module found
	I1213 08:47:55.482469   47783 out.go:179] * Using the docker driver based on existing profile
	I1213 08:47:55.485237   47783 start.go:309] selected driver: docker
	I1213 08:47:55.485259   47783 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.485359   47783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:47:55.485469   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.552137   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.542465837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.552549   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:55.552614   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:55.552664   47783 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.555904   47783 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:47:55.558801   47783 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:47:55.561846   47783 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:47:55.564866   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:55.564922   47783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:47:55.564938   47783 cache.go:65] Caching tarball of preloaded images
	I1213 08:47:55.564963   47783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:47:55.565027   47783 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:47:55.565039   47783 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:47:55.565188   47783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:47:55.585020   47783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:47:55.585044   47783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:47:55.585064   47783 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:47:55.585094   47783 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:47:55.585169   47783 start.go:364] duration metric: took 45.161µs to acquireMachinesLock for "functional-074420"
	I1213 08:47:55.585195   47783 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:47:55.585204   47783 fix.go:54] fixHost starting: 
	I1213 08:47:55.585456   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:55.601925   47783 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:47:55.601956   47783 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:47:55.605110   47783 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:47:55.605143   47783 machine.go:94] provisionDockerMachine start ...
	I1213 08:47:55.605228   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.622184   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.622521   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.622536   47783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:47:55.770899   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.770923   47783 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:47:55.770990   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.788917   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.789224   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.789243   47783 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:47:55.944141   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.944216   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.963276   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.963669   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.963693   47783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:47:56.123813   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:47:56.123839   47783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:47:56.123865   47783 ubuntu.go:190] setting up certificates
	I1213 08:47:56.123875   47783 provision.go:84] configureAuth start
	I1213 08:47:56.123935   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.141934   47783 provision.go:143] copyHostCerts
	I1213 08:47:56.141983   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142030   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:47:56.142044   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142121   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:47:56.142216   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142238   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:47:56.142247   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142276   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:47:56.142329   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142361   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:47:56.142370   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142397   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:47:56.142457   47783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:47:56.320875   47783 provision.go:177] copyRemoteCerts
	I1213 08:47:56.320949   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:47:56.320994   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.338054   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.442993   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 08:47:56.443052   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:47:56.459467   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 08:47:56.459650   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:47:56.476836   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 08:47:56.476894   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:47:56.494408   47783 provision.go:87] duration metric: took 370.509157ms to configureAuth
	I1213 08:47:56.494435   47783 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:47:56.494611   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:56.494624   47783 machine.go:97] duration metric: took 889.474725ms to provisionDockerMachine
	I1213 08:47:56.494633   47783 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:47:56.494644   47783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:47:56.494700   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:47:56.494748   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.511710   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.615158   47783 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:47:56.618357   47783 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:47:56.618378   47783 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:47:56.618383   47783 command_runner.go:130] > VERSION_ID="12"
	I1213 08:47:56.618388   47783 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:47:56.618392   47783 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:47:56.618422   47783 command_runner.go:130] > ID=debian
	I1213 08:47:56.618436   47783 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:47:56.618441   47783 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:47:56.618448   47783 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:47:56.618517   47783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:47:56.618537   47783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:47:56.618550   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:47:56.618607   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:47:56.618691   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:47:56.618702   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1213 08:47:56.618783   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:47:56.618792   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1213 08:47:56.618842   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:47:56.626162   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:56.643608   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:47:56.661460   47783 start.go:296] duration metric: took 166.811201ms for postStartSetup
	I1213 08:47:56.661553   47783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:47:56.661603   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.678627   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.785005   47783 command_runner.go:130] > 14%
	I1213 08:47:56.785418   47783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:47:56.789762   47783 command_runner.go:130] > 169G
	I1213 08:47:56.790146   47783 fix.go:56] duration metric: took 1.204938515s for fixHost
	I1213 08:47:56.790168   47783 start.go:83] releasing machines lock for "functional-074420", held for 1.204983079s
	I1213 08:47:56.790231   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.811813   47783 ssh_runner.go:195] Run: cat /version.json
	I1213 08:47:56.811877   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.812180   47783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:47:56.812227   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.839131   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.843453   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:57.035647   47783 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 08:47:57.038511   47783 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:47:57.038690   47783 ssh_runner.go:195] Run: systemctl --version
	I1213 08:47:57.044708   47783 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:47:57.044761   47783 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:47:57.045134   47783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:47:57.049401   47783 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:47:57.049443   47783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:47:57.049503   47783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:47:57.057127   47783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:47:57.057158   47783 start.go:496] detecting cgroup driver to use...
	I1213 08:47:57.057211   47783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:47:57.057279   47783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:47:57.072743   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:47:57.086014   47783 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:47:57.086118   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:47:57.102029   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:47:57.115088   47783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:47:57.226726   47783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:47:57.347870   47783 docker.go:234] disabling docker service ...
	I1213 08:47:57.347940   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:47:57.363202   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:47:57.377010   47783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:47:57.506500   47783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:47:57.649131   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:47:57.662497   47783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:47:57.677018   47783 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:47:57.678207   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:47:57.688555   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:47:57.698272   47783 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:47:57.698370   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:47:57.707500   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.716692   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:47:57.725739   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.734886   47783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:47:57.743485   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:47:57.753073   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:47:57.761993   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:47:57.770719   47783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:47:57.777695   47783 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:47:57.778683   47783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:47:57.786237   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:57.908393   47783 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:47:58.046253   47783 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:47:58.046368   47783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:47:58.050493   47783 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 08:47:58.050558   47783 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:47:58.050578   47783 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1213 08:47:58.050603   47783 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:58.050636   47783 command_runner.go:130] > Access: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050663   47783 command_runner.go:130] > Modify: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050685   47783 command_runner.go:130] > Change: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050720   47783 command_runner.go:130] >  Birth: -
	I1213 08:47:58.050927   47783 start.go:564] Will wait 60s for crictl version
	I1213 08:47:58.051002   47783 ssh_runner.go:195] Run: which crictl
	I1213 08:47:58.054661   47783 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:47:58.054852   47783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:47:58.077876   47783 command_runner.go:130] > Version:  0.1.0
	I1213 08:47:58.077939   47783 command_runner.go:130] > RuntimeName:  containerd
	I1213 08:47:58.077961   47783 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 08:47:58.077985   47783 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:47:58.080051   47783 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:47:58.080159   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.100302   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.101953   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.119235   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.126521   47783 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:47:58.129463   47783 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:47:58.145273   47783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:47:58.149369   47783 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 08:47:58.149453   47783 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:47:58.149580   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:58.149657   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.174191   47783 command_runner.go:130] > {
	I1213 08:47:58.174214   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.174219   47783 command_runner.go:130] >     {
	I1213 08:47:58.174232   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.174237   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174242   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.174246   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174250   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174259   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.174263   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174267   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.174271   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174275   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174278   47783 command_runner.go:130] >     },
	I1213 08:47:58.174281   47783 command_runner.go:130] >     {
	I1213 08:47:58.174289   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.174299   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174305   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.174308   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174313   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174321   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.174328   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174332   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.174336   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174340   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174343   47783 command_runner.go:130] >     },
	I1213 08:47:58.174349   47783 command_runner.go:130] >     {
	I1213 08:47:58.174356   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.174361   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174366   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.174371   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174384   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174395   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.174399   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174403   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.174409   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.174417   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174421   47783 command_runner.go:130] >     },
	I1213 08:47:58.174424   47783 command_runner.go:130] >     {
	I1213 08:47:58.174430   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.174436   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174441   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.174444   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174449   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174458   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.174464   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174468   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.174472   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174475   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174478   47783 command_runner.go:130] >       },
	I1213 08:47:58.174487   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174491   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174494   47783 command_runner.go:130] >     },
	I1213 08:47:58.174497   47783 command_runner.go:130] >     {
	I1213 08:47:58.174507   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.174511   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174518   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.174522   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174526   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174533   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.174539   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174545   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.174551   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174559   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174562   47783 command_runner.go:130] >       },
	I1213 08:47:58.174566   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174576   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174580   47783 command_runner.go:130] >     },
	I1213 08:47:58.174584   47783 command_runner.go:130] >     {
	I1213 08:47:58.174594   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.174601   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174607   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.174610   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174614   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174625   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.174631   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174635   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.174638   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174642   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174645   47783 command_runner.go:130] >       },
	I1213 08:47:58.174649   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174655   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174659   47783 command_runner.go:130] >     },
	I1213 08:47:58.174663   47783 command_runner.go:130] >     {
	I1213 08:47:58.174671   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.174677   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174681   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.174684   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174688   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174699   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.174704   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174709   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.174713   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174716   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174721   47783 command_runner.go:130] >     },
	I1213 08:47:58.174725   47783 command_runner.go:130] >     {
	I1213 08:47:58.174732   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.174742   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174747   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.174753   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174757   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174765   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.174774   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174781   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.174784   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174788   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174797   47783 command_runner.go:130] >       },
	I1213 08:47:58.174802   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174808   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174811   47783 command_runner.go:130] >     },
	I1213 08:47:58.174814   47783 command_runner.go:130] >     {
	I1213 08:47:58.174821   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.174828   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174833   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.174836   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174840   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174848   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.174851   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174855   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.174860   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174864   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.174875   47783 command_runner.go:130] >       },
	I1213 08:47:58.174880   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174884   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.174887   47783 command_runner.go:130] >     }
	I1213 08:47:58.174890   47783 command_runner.go:130] >   ]
	I1213 08:47:58.174893   47783 command_runner.go:130] > }
	I1213 08:47:58.175043   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.175056   47783 containerd.go:534] Images already preloaded, skipping extraction
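
The preload check shells out to "sudo crictl images --output json" and decides that every image required for v1.35.0-beta.0 is already present. A sketch of parsing that output into repo tags, with struct fields taken from the JSON printed above (illustrative, not minikube's parser):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the subset of `crictl images --output json`
	// shown in the log above: each image carries its repoTags.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}
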
	I1213 08:47:58.175117   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.196592   47783 command_runner.go:130] > {
	I1213 08:47:58.196612   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.196616   47783 command_runner.go:130] >     {
	I1213 08:47:58.196626   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.196631   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196637   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.196641   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196644   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196654   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.196660   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196664   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.196674   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196678   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196682   47783 command_runner.go:130] >     },
	I1213 08:47:58.196685   47783 command_runner.go:130] >     {
	I1213 08:47:58.196701   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.196710   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196715   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.196719   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196723   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196732   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.196739   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196745   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.196753   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196757   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196764   47783 command_runner.go:130] >     },
	I1213 08:47:58.196768   47783 command_runner.go:130] >     {
	I1213 08:47:58.196782   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.196787   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196793   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.196798   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196807   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196825   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.196833   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196838   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.196847   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.196852   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196861   47783 command_runner.go:130] >     },
	I1213 08:47:58.196864   47783 command_runner.go:130] >     {
	I1213 08:47:58.196871   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.196875   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196880   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.196884   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196888   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196897   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.196904   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196908   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.196912   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.196916   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.196924   47783 command_runner.go:130] >       },
	I1213 08:47:58.196929   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196936   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196940   47783 command_runner.go:130] >     },
	I1213 08:47:58.196943   47783 command_runner.go:130] >     {
	I1213 08:47:58.196953   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.196958   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196963   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.196968   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196973   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196984   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.196993   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196998   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.197005   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197015   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197022   47783 command_runner.go:130] >       },
	I1213 08:47:58.197030   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197034   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197037   47783 command_runner.go:130] >     },
	I1213 08:47:58.197040   47783 command_runner.go:130] >     {
	I1213 08:47:58.197048   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.197056   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197063   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.197069   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197074   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197086   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.197094   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197098   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.197105   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197109   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197113   47783 command_runner.go:130] >       },
	I1213 08:47:58.197117   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197121   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197126   47783 command_runner.go:130] >     },
	I1213 08:47:58.197129   47783 command_runner.go:130] >     {
	I1213 08:47:58.197140   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.197144   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197154   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.197158   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197162   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197173   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.197180   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197185   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.197189   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197194   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197198   47783 command_runner.go:130] >     },
	I1213 08:47:58.197201   47783 command_runner.go:130] >     {
	I1213 08:47:58.197209   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.197216   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197225   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.197232   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197237   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197248   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.197255   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197259   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.197266   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197270   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197273   47783 command_runner.go:130] >       },
	I1213 08:47:58.197279   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197283   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197286   47783 command_runner.go:130] >     },
	I1213 08:47:58.197289   47783 command_runner.go:130] >     {
	I1213 08:47:58.197296   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.197304   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197309   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.197313   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197320   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197329   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.197335   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197339   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.197346   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197351   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.197356   47783 command_runner.go:130] >       },
	I1213 08:47:58.197362   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197366   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.197369   47783 command_runner.go:130] >     }
	I1213 08:47:58.197372   47783 command_runner.go:130] >   ]
	I1213 08:47:58.197375   47783 command_runner.go:130] > }
	I1213 08:47:58.199421   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.199439   47783 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:47:58.199455   47783 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:47:58.199601   47783 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
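
The kubelet unit above relies on the standard systemd override trick: the empty ExecStart= clears the base unit's command before the new command line is set. A hypothetical text/template sketch that renders the same drop-in from the node's version, hostname, and IP (names are illustrative, not minikube's):

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn reproduces the override shown above; the bare ExecStart=
	// resets the base unit before the real command is declared.
	const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		if err := t.Execute(os.Stdout, struct{ Version, Name, IP string }{
			"v1.35.0-beta.0", "functional-074420", "192.168.49.2",
		}); err != nil {
			panic(err)
		}
	}
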
	I1213 08:47:58.199669   47783 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:47:58.226280   47783 command_runner.go:130] > {
	I1213 08:47:58.226303   47783 command_runner.go:130] >   "cniconfig": {
	I1213 08:47:58.226310   47783 command_runner.go:130] >     "Networks": [
	I1213 08:47:58.226314   47783 command_runner.go:130] >       {
	I1213 08:47:58.226319   47783 command_runner.go:130] >         "Config": {
	I1213 08:47:58.226324   47783 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 08:47:58.226329   47783 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 08:47:58.226333   47783 command_runner.go:130] >           "Plugins": [
	I1213 08:47:58.226336   47783 command_runner.go:130] >             {
	I1213 08:47:58.226340   47783 command_runner.go:130] >               "Network": {
	I1213 08:47:58.226344   47783 command_runner.go:130] >                 "ipam": {},
	I1213 08:47:58.226350   47783 command_runner.go:130] >                 "type": "loopback"
	I1213 08:47:58.226358   47783 command_runner.go:130] >               },
	I1213 08:47:58.226364   47783 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 08:47:58.226371   47783 command_runner.go:130] >             }
	I1213 08:47:58.226374   47783 command_runner.go:130] >           ],
	I1213 08:47:58.226384   47783 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 08:47:58.226388   47783 command_runner.go:130] >         },
	I1213 08:47:58.226398   47783 command_runner.go:130] >         "IFName": "lo"
	I1213 08:47:58.226402   47783 command_runner.go:130] >       }
	I1213 08:47:58.226405   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226410   47783 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 08:47:58.226415   47783 command_runner.go:130] >     "PluginDirs": [
	I1213 08:47:58.226419   47783 command_runner.go:130] >       "/opt/cni/bin"
	I1213 08:47:58.226425   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226430   47783 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 08:47:58.226442   47783 command_runner.go:130] >     "Prefix": "eth"
	I1213 08:47:58.226445   47783 command_runner.go:130] >   },
	I1213 08:47:58.226448   47783 command_runner.go:130] >   "config": {
	I1213 08:47:58.226454   47783 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 08:47:58.226459   47783 command_runner.go:130] >       "/etc/cdi",
	I1213 08:47:58.226466   47783 command_runner.go:130] >       "/var/run/cdi"
	I1213 08:47:58.226472   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226480   47783 command_runner.go:130] >     "cni": {
	I1213 08:47:58.226484   47783 command_runner.go:130] >       "binDir": "",
	I1213 08:47:58.226487   47783 command_runner.go:130] >       "binDirs": [
	I1213 08:47:58.226491   47783 command_runner.go:130] >         "/opt/cni/bin"
	I1213 08:47:58.226495   47783 command_runner.go:130] >       ],
	I1213 08:47:58.226499   47783 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 08:47:58.226503   47783 command_runner.go:130] >       "confTemplate": "",
	I1213 08:47:58.226507   47783 command_runner.go:130] >       "ipPref": "",
	I1213 08:47:58.226510   47783 command_runner.go:130] >       "maxConfNum": 1,
	I1213 08:47:58.226514   47783 command_runner.go:130] >       "setupSerially": false,
	I1213 08:47:58.226519   47783 command_runner.go:130] >       "useInternalLoopback": false
	I1213 08:47:58.226524   47783 command_runner.go:130] >     },
	I1213 08:47:58.226530   47783 command_runner.go:130] >     "containerd": {
	I1213 08:47:58.226538   47783 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 08:47:58.226543   47783 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 08:47:58.226548   47783 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 08:47:58.226552   47783 command_runner.go:130] >       "runtimes": {
	I1213 08:47:58.226557   47783 command_runner.go:130] >         "runc": {
	I1213 08:47:58.226562   47783 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 08:47:58.226566   47783 command_runner.go:130] >           "PodAnnotations": null,
	I1213 08:47:58.226570   47783 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 08:47:58.226575   47783 command_runner.go:130] >           "cgroupWritable": false,
	I1213 08:47:58.226580   47783 command_runner.go:130] >           "cniConfDir": "",
	I1213 08:47:58.226586   47783 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 08:47:58.226591   47783 command_runner.go:130] >           "io_type": "",
	I1213 08:47:58.226596   47783 command_runner.go:130] >           "options": {
	I1213 08:47:58.226601   47783 command_runner.go:130] >             "BinaryName": "",
	I1213 08:47:58.226607   47783 command_runner.go:130] >             "CriuImagePath": "",
	I1213 08:47:58.226612   47783 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 08:47:58.226616   47783 command_runner.go:130] >             "IoGid": 0,
	I1213 08:47:58.226620   47783 command_runner.go:130] >             "IoUid": 0,
	I1213 08:47:58.226629   47783 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 08:47:58.226633   47783 command_runner.go:130] >             "Root": "",
	I1213 08:47:58.226641   47783 command_runner.go:130] >             "ShimCgroup": "",
	I1213 08:47:58.226649   47783 command_runner.go:130] >             "SystemdCgroup": false
	I1213 08:47:58.226652   47783 command_runner.go:130] >           },
	I1213 08:47:58.226657   47783 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 08:47:58.226666   47783 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 08:47:58.226678   47783 command_runner.go:130] >           "runtimePath": "",
	I1213 08:47:58.226683   47783 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 08:47:58.226689   47783 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 08:47:58.226698   47783 command_runner.go:130] >           "snapshotter": ""
	I1213 08:47:58.226702   47783 command_runner.go:130] >         }
	I1213 08:47:58.226705   47783 command_runner.go:130] >       }
	I1213 08:47:58.226710   47783 command_runner.go:130] >     },
	I1213 08:47:58.226721   47783 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 08:47:58.226728   47783 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 08:47:58.226735   47783 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 08:47:58.226739   47783 command_runner.go:130] >     "disableApparmor": false,
	I1213 08:47:58.226744   47783 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 08:47:58.226748   47783 command_runner.go:130] >     "disableProcMount": false,
	I1213 08:47:58.226753   47783 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 08:47:58.226759   47783 command_runner.go:130] >     "enableCDI": true,
	I1213 08:47:58.226763   47783 command_runner.go:130] >     "enableSelinux": false,
	I1213 08:47:58.226769   47783 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 08:47:58.226775   47783 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 08:47:58.226782   47783 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 08:47:58.226787   47783 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 08:47:58.226797   47783 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 08:47:58.226806   47783 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 08:47:58.226811   47783 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 08:47:58.226819   47783 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226824   47783 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 08:47:58.226830   47783 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226837   47783 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 08:47:58.226843   47783 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 08:47:58.226853   47783 command_runner.go:130] >   },
	I1213 08:47:58.226860   47783 command_runner.go:130] >   "features": {
	I1213 08:47:58.226865   47783 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 08:47:58.226868   47783 command_runner.go:130] >   },
	I1213 08:47:58.226872   47783 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 08:47:58.226884   47783 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226898   47783 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226903   47783 command_runner.go:130] >   "runtimeHandlers": [
	I1213 08:47:58.226906   47783 command_runner.go:130] >     {
	I1213 08:47:58.226910   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226915   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226921   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226925   47783 command_runner.go:130] >       }
	I1213 08:47:58.226928   47783 command_runner.go:130] >     },
	I1213 08:47:58.226934   47783 command_runner.go:130] >     {
	I1213 08:47:58.226938   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226946   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226958   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226962   47783 command_runner.go:130] >       },
	I1213 08:47:58.226965   47783 command_runner.go:130] >       "name": "runc"
	I1213 08:47:58.226968   47783 command_runner.go:130] >     }
	I1213 08:47:58.226971   47783 command_runner.go:130] >   ],
	I1213 08:47:58.226976   47783 command_runner.go:130] >   "status": {
	I1213 08:47:58.226984   47783 command_runner.go:130] >     "conditions": [
	I1213 08:47:58.226989   47783 command_runner.go:130] >       {
	I1213 08:47:58.226993   47783 command_runner.go:130] >         "message": "",
	I1213 08:47:58.226997   47783 command_runner.go:130] >         "reason": "",
	I1213 08:47:58.227001   47783 command_runner.go:130] >         "status": true,
	I1213 08:47:58.227009   47783 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 08:47:58.227015   47783 command_runner.go:130] >       },
	I1213 08:47:58.227019   47783 command_runner.go:130] >       {
	I1213 08:47:58.227033   47783 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 08:47:58.227038   47783 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 08:47:58.227046   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227054   47783 command_runner.go:130] >         "type": "NetworkReady"
	I1213 08:47:58.227057   47783 command_runner.go:130] >       },
	I1213 08:47:58.227060   47783 command_runner.go:130] >       {
	I1213 08:47:58.227083   47783 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 08:47:58.227094   47783 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 08:47:58.227100   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227106   47783 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 08:47:58.227111   47783 command_runner.go:130] >       }
	I1213 08:47:58.227115   47783 command_runner.go:130] >     ]
	I1213 08:47:58.227118   47783 command_runner.go:130] >   }
	I1213 08:47:58.227121   47783 command_runner.go:130] > }
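
The "crictl info" dump above ends with the runtime status: RuntimeReady is true but NetworkReady is false because no CNI config exists yet, which is exactly why the next lines pick kindnet. A sketch that extracts just those conditions (field names as printed above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runtimeStatus matches the "status.conditions" shape in the
	// `crictl info` output logged above.
	type runtimeStatus struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  bool   `json:"status"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "info").Output()
		if err != nil {
			panic(err)
		}
		var rs runtimeStatus
		if err := json.Unmarshal(out, &rs); err != nil {
			panic(err)
		}
		for _, c := range rs.Status.Conditions {
			fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Message)
		}
	}
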
	I1213 08:47:58.229345   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:58.229369   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:58.229387   47783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:47:58.229409   47783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:47:58.229527   47783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
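The generated kubeadm config above is four YAML documents in a single file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity-check sketch that decodes each document and prints its apiVersion and kind (assumes gopkg.in/yaml.v3; the path matches the scp destination below):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f) // iterates over the "---" separated documents
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
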
	I1213 08:47:58.229596   47783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:47:58.237061   47783 command_runner.go:130] > kubeadm
	I1213 08:47:58.237081   47783 command_runner.go:130] > kubectl
	I1213 08:47:58.237086   47783 command_runner.go:130] > kubelet
	I1213 08:47:58.237099   47783 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:47:58.237151   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:47:58.244326   47783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:47:58.256951   47783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:47:58.269808   47783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:47:58.282145   47783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:47:58.286872   47783 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
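
Before adding a hosts entry, minikube greps /etc/hosts for the control-plane alias; the match above means no edit is needed. A plain-Go equivalent of that grep (hypothetical helper):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasHostsEntry reports whether /etc/hosts already maps name to ip,
	// mirroring: grep "192.168.49.2<TAB>control-plane.minikube.internal$" /etc/hosts
	func hasHostsEntry(ip, name string) bool {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			return false
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == ip && fields[len(fields)-1] == name {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(hasHostsEntry("192.168.49.2", "control-plane.minikube.internal"))
	}
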
	I1213 08:47:58.287376   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:58.410199   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:47:59.022103   47783 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:47:59.022125   47783 certs.go:195] generating shared ca certs ...
	I1213 08:47:59.022141   47783 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.022352   47783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:47:59.022424   47783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:47:59.022444   47783 certs.go:257] generating profile certs ...
	I1213 08:47:59.022584   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:47:59.022699   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:47:59.022768   47783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:47:59.022808   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:47:59.022855   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:47:59.022876   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:47:59.022904   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:47:59.022937   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:47:59.022973   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:47:59.022995   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:47:59.023008   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:47:59.023095   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:47:59.023154   47783 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:47:59.023166   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:47:59.023224   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:47:59.023288   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:47:59.023328   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:47:59.023408   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:59.023471   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.023492   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.023541   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.024142   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:47:59.045491   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:47:59.066181   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:47:59.087256   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:47:59.105383   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:47:59.122457   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:47:59.141188   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:47:59.160057   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:47:59.177518   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:47:59.194757   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:47:59.211990   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:47:59.231728   47783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:47:59.244528   47783 ssh_runner.go:195] Run: openssl version
	I1213 08:47:59.250389   47783 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:47:59.250777   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.258690   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:47:59.266115   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269715   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269750   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269798   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.310445   47783 command_runner.go:130] > 51391683
	I1213 08:47:59.310954   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:47:59.318044   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.325154   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:47:59.332532   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336318   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336361   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336416   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.376950   47783 command_runner.go:130] > 3ec20f2e
	I1213 08:47:59.377430   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:47:59.384916   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.392420   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:47:59.399763   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403540   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403584   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403630   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.443918   47783 command_runner.go:130] > b5213941
	I1213 08:47:59.444419   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
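
The openssl x509 -hash calls compute each CA's subject hash (51391683, 3ec20f2e, and b5213941 above), and the /etc/ssl/certs/<hash>.0 symlinks are how OpenSSL-based clients look up trusted certs by subject. A sketch that reproduces the hash-and-link step by shelling out to openssl (needs root; path taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash of pem and creates the
	// /etc/ssl/certs/<hash>.0 symlink that the `test -L` checks verify.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
	}
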
	I1213 08:47:59.451702   47783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455380   47783 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455462   47783 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:47:59.455488   47783 command_runner.go:130] > Device: 259,1	Inode: 1311318     Links: 1
	I1213 08:47:59.455502   47783 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:59.455526   47783 command_runner.go:130] > Access: 2025-12-13 08:43:51.909308195 +0000
	I1213 08:47:59.455533   47783 command_runner.go:130] > Modify: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455538   47783 command_runner.go:130] > Change: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455544   47783 command_runner.go:130] >  Birth: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455631   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:47:59.496226   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.496712   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:47:59.538384   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.538813   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:47:59.584114   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.584598   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:47:59.624635   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.625106   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:47:59.665474   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.665947   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:47:59.706066   47783 command_runner.go:130] > Certificate will not expire
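
Each -checkend 86400 invocation asks whether the certificate will still be valid 24 hours from now; "Certificate will not expire" is the passing answer. The same test in pure Go with crypto/x509 (path as in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath
	// expires within d -- the crypto/x509 equivalent of
	// `openssl x509 -noout -checkend <seconds>`.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
	}
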
	I1213 08:47:59.706546   47783 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:59.706648   47783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:47:59.706732   47783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:47:59.735062   47783 cri.go:89] found id: ""
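	Before deciding how to start, minikube asks the CRI for any existing kube-system containers; the empty `found id: ""` above means no control-plane containers are running yet. A hedged equivalent of that probe, shelling out to crictl the way the log does (assumes crictl and sudo are available on the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter the log shows: only containers whose pod namespace is kube-system.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system container(s)\n", len(ids))
}
```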
	I1213 08:47:59.735134   47783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:47:59.742080   47783 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:47:59.742103   47783 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:47:59.742110   47783 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:47:59.743039   47783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
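	The restart-versus-fresh-init decision is driven purely by which state files already exist on the node: the kubelet config, the kubeadm flags file, and the etcd data directory. A small sketch of that existence check, using the three paths listed above:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Paths copied from the log's `sudo ls` probe.
	paths := []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd",
	}
	existing := 0
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			existing++
		}
	}
	if existing == len(paths) {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("fresh init required")
	}
}
```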
	I1213 08:47:59.743056   47783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:47:59.743123   47783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:47:59.750746   47783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:47:59.751192   47783 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.751301   47783 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-074420" cluster setting kubeconfig missing "functional-074420" context setting]
	I1213 08:47:59.751688   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.752162   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.752336   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
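	The client config dumped above is an ordinary client-go rest.Config authenticated by the profile's client certificate. A sketch of building an equivalent clientset from those same files (client-go import paths as in recent releases; error handling trimmed):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and certificate paths mirror the logged config.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```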
	I1213 08:47:59.752888   47783 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:47:59.752908   47783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:47:59.752914   47783 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:47:59.752919   47783 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:47:59.752923   47783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:47:59.753010   47783 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:47:59.753251   47783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:47:59.761240   47783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 08:47:59.761275   47783 kubeadm.go:602] duration metric: took 18.213538ms to restartPrimaryControlPlane
	I1213 08:47:59.761286   47783 kubeadm.go:403] duration metric: took 54.748002ms to StartCluster
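	The reconfiguration short-circuit works by diffing the freshly rendered kubeadm.yaml against the copy already on the node: diff exits 0 when the files match, so a clean exit skips regeneration. A sketch of that exit-code check (paths from the log; the exit-code classification assumes GNU diff semantics):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 = files identical, 1 = files differ, >1 = diff itself failed.
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err == nil {
		fmt.Println("The running cluster does not require reconfiguration")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Println("kubeadm.yaml changed; reconfiguration needed")
		return
	}
	fmt.Println("diff failed:", err)
}
```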
	I1213 08:47:59.761334   47783 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.761412   47783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.762024   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.762236   47783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:47:59.762588   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:59.762635   47783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:47:59.762697   47783 addons.go:70] Setting storage-provisioner=true in profile "functional-074420"
	I1213 08:47:59.762710   47783 addons.go:239] Setting addon storage-provisioner=true in "functional-074420"
	I1213 08:47:59.762736   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.762848   47783 addons.go:70] Setting default-storageclass=true in profile "functional-074420"
	I1213 08:47:59.762897   47783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074420"
	I1213 08:47:59.763226   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.763230   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.768637   47783 out.go:179] * Verifying Kubernetes components...
	I1213 08:47:59.771460   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:59.801964   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.802130   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.802416   47783 addons.go:239] Setting addon default-storageclass=true in "functional-074420"
	I1213 08:47:59.802452   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.802879   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.817615   47783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:47:59.820407   47783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:47:59.820438   47783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:47:59.820510   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.832904   47783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:47:59.832927   47783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:47:59.832987   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.858620   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:59.867019   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:48:00.019931   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:48:00.079586   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:00.079699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:00.772755   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772781   47783 node_ready.go:35] waiting up to 6m0s for node "functional-074420" to be "Ready" ...
	W1213 08:48:00.772842   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772962   47783 retry.go:31] will retry after 342.791424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:00.773112   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:00.773133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773144   47783 retry.go:31] will retry after 244.896783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:00.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.019052   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.079123   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.079165   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.079186   47783 retry.go:31] will retry after 233.412949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.116509   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.177616   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.181525   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.181562   47783 retry.go:31] will retry after 544.217788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.273820   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.273908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.274281   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.313528   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.373257   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.376997   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.377026   47783 retry.go:31] will retry after 483.901383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.726523   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.774029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.774123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.774536   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.788802   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.792516   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.792575   47783 retry.go:31] will retry after 627.991267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.861830   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.921846   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.925982   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.926017   47783 retry.go:31] will retry after 1.103907842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
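	While the apiserver is down, every `kubectl apply` fails with connection refused, and minikube's retry helper reschedules it with a randomized, growing delay (343ms, 245ms, 544ms, ... eventually several seconds, as the retry.go lines above show). A hedged sketch of such a jittered-backoff loop; the growth and jitter factors here are illustrative, not minikube's actual constants:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries.
func retryWithJitter(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// Double the delay and add up to 50% jitter, mimicking the log's spread.
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay))/2)
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithJitter(func() error {
		calls++
		if calls < 4 {
			return errors.New("connection refused")
		}
		return nil
	}, 10, 300*time.Millisecond)
	fmt.Println("applied after", calls, "attempts")
}
```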
	I1213 08:48:02.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:02.420977   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:02.487960   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:02.491818   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.491849   47783 retry.go:31] will retry after 452.917795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.773507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:02.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
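	In parallel with the addon retries, node_ready.go polls the node object every 500ms and inspects its Ready condition; until the apiserver comes back, each GET fails as logged above. A sketch of that readiness wait with client-go (TLS fields elided for brevity; the node name and timeout mirror the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitNodeReady polls the node until its Ready condition is True or timeout elapses.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Mirrors the logged warning: keep retrying while the apiserver is down.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // add TLSClientConfig as shown earlier
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = waitNodeReady(cs, "functional-074420", 6*time.Minute)
}
```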
	I1213 08:48:02.945881   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:03.009201   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.013021   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.013052   47783 retry.go:31] will retry after 1.276929732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.030115   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:03.100586   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.104547   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.104578   47783 retry.go:31] will retry after 1.048810244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.273922   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.274012   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.274318   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:03.773006   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.773078   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.773422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.153636   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:04.212539   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.212608   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.212632   47783 retry.go:31] will retry after 1.498415757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.273795   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.273919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.274275   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.290503   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:04.351966   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.352013   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.352031   47783 retry.go:31] will retry after 2.776026758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.773561   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.773631   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.773950   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:04.774040   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:05.273769   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.273843   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.274174   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.711960   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:05.773532   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.773904   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.778452   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:05.778491   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:05.778510   47783 retry.go:31] will retry after 3.257875901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:06.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:06.773209   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.773292   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:07.129286   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:07.188224   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:07.188280   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.188301   47783 retry.go:31] will retry after 1.575099921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.273578   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.273669   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.273988   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:07.274044   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:07.773778   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.773852   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.774188   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.273837   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.273926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.274179   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.763743   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:08.773132   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.773211   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.773479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.823924   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:08.827716   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:08.827745   47783 retry.go:31] will retry after 4.082199617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.037077   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:09.107584   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:09.107627   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.107646   47783 retry.go:31] will retry after 4.733469164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.273965   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.274042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.274370   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:09.274422   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:09.773216   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.773289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.773111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.773192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.773561   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.272986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.273307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.773117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:11.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:12.273226   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.773101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.773360   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.910787   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:12.972202   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:12.972251   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:12.972270   47783 retry.go:31] will retry after 8.911795338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.273667   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.274062   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:13.773915   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.773987   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.774307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:13.774364   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:13.841699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:13.900246   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:13.900294   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.900313   47783 retry.go:31] will retry after 6.419298699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.320652   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:20.382818   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:20.382863   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.382884   47783 retry.go:31] will retry after 5.774410243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:21.885194   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:21.947231   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:21.947284   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:21.947318   47783 retry.go:31] will retry after 10.220008645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
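Each apply attempt is the same command line pushed through an SSH runner into the node (ssh_runner.go:195): sudo sets KUBECONFIG inline, then the version-pinned kubectl force-applies the manifest. Reproduced here as a plain local exec sketch; applyAddon is a made-up helper and the SSH hop is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the logged command: sudo with an inline KUBECONFIG
// assignment, the pinned kubectl binary, and a forced apply of one manifest.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println(err)
	}
}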
	I1213 08:48:26.158458   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:26.211883   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:26.215285   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:26.215316   47783 retry.go:31] will retry after 15.443420543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.167590   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:32.226722   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:32.226762   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.226781   47783 retry.go:31] will retry after 8.254164246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.481720   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:40.548346   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:40.548381   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.548399   47783 retry.go:31] will retry after 23.072803829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.658979   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:41.720805   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:41.720849   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.720869   47783 retry.go:31] will retry after 14.236359641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:55.957869   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:56.020865   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:56.020923   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.020946   47783 retry.go:31] will retry after 43.666748427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:02.272965   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.273041   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.273365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.773094   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.773195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:03.273064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:03.273512   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
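The elided loop is minikube waiting for the node's Ready condition. A hedged client-go sketch of that wait; waitNodeReady, the 500ms interval and the 5-minute timeout are illustrative assumptions (and PollUntilContextTimeout assumes a recent client-go), not minikube's exact code:

// Illustrative sketch (not minikube's node_ready.go): poll a node's
// Ready condition until it is True or the timeout expires, mirroring
// the GET /api/v1/nodes/functional-074420 loop in this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient errors (e.g. connection refused while the
				// apiserver restarts) are logged and retried, not fatal.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "functional-074420"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}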
	I1213 08:49:03.622173   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:03.678608   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:03.682133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.682162   47783 retry.go:31] will retry after 22.66884586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[polling elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 repeated every ~0.5s from 08:49:03.773 to 08:49:26.273, each request failing with "connection refused"; node_ready.go:55 logged the "will retry" warning at 08:49:05, 08:49:07, 08:49:09, 08:49:11, 08:49:14, 08:49:16, 08:49:18, 08:49:20, 08:49:23 and 08:49:25]
	I1213 08:49:26.351410   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:26.407043   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410457   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410551   47783 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
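The suggested --validate=false would not have rescued these applies: validation only fails because downloading /openapi/v2 needs the same unreachable apiserver that the apply itself needs. A hypothetical pre-flight probe (apiserverReady and the /readyz call are illustrative additions, not part of minikube; /readyz may also require authorization on locked-down clusters) would surface the real failure mode directly:

// Hypothetical pre-flight check (not part of minikube): probe the
// apiserver's /readyz endpoint before running `kubectl apply`, so a
// down apiserver is reported directly instead of surfacing as an
// OpenAPI-download validation error.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(base string) error {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The apiserver's cert is usually not trusted from outside the
		// cluster; skip verification for this liveness probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return fmt.Errorf("apiserver not reachable: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver not ready: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := apiserverReady("https://192.168.49.2:8441"); err != nil {
		fmt.Println(err) // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
		return
	}
	fmt.Println("apiserver ready")
}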
	[polling elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 repeated every ~0.5s from 08:49:26.773 to 08:49:39.273, each request failing with "connection refused"; node_ready.go:55 logged the "will retry" warning at 08:49:27, 08:49:30, 08:49:32, 08:49:34, 08:49:36 and 08:49:39]
	I1213 08:49:39.687941   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:49:39.742570   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746037   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746134   47783 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:49:39.749234   47783 out.go:179] * Enabled addons: 
	I1213 08:49:39.751225   47783 addons.go:530] duration metric: took 1m39.988589749s for enable addons: enabled=[]
	[polling elided: GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 repeated every ~0.5s from 08:49:39.773 to 08:49:51.773, each request failing with "connection refused"; node_ready.go:55 logged the "will retry" warning at 08:49:41, 08:49:43, 08:49:45, 08:49:48 and 08:49:50]
	I1213 08:49:52.273068   47783 type.go:168] "Request Body" body=""
	I1213 08:49:52.273164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:52.273473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:52.773028   47783 type.go:168] "Request Body" body=""
	I1213 08:49:52.773114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:52.773409   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:52.773465   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:53.273131   47783 type.go:168] "Request Body" body=""
	I1213 08:49:53.273206   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:53.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:53.773165   47783 type.go:168] "Request Body" body=""
	I1213 08:49:53.773244   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:53.773580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:54.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:49:54.273089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:54.273345   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:54.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:49:54.773137   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:54.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:54.773578   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:55.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:49:55.273162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:55.273699   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:55.773529   47783 type.go:168] "Request Body" body=""
	I1213 08:49:55.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:55.773850   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:56.273641   47783 type.go:168] "Request Body" body=""
	I1213 08:49:56.273732   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:56.274042   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:56.773818   47783 type.go:168] "Request Body" body=""
	I1213 08:49:56.773897   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:56.774220   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:56.774270   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:57.273975   47783 type.go:168] "Request Body" body=""
	I1213 08:49:57.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:57.274295   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:57.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:49:57.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:57.773405   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:58.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:49:58.273173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:58.273494   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:58.773156   47783 type.go:168] "Request Body" body=""
	I1213 08:49:58.773264   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:58.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:59.273323   47783 type.go:168] "Request Body" body=""
	I1213 08:49:59.273404   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:59.273727   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:59.273784   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:59.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:49:59.773775   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:59.774108   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:00.297017   47783 type.go:168] "Request Body" body=""
	I1213 08:50:00.297166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:00.297535   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:00.773617   47783 type.go:168] "Request Body" body=""
	I1213 08:50:00.773751   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:00.774118   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:01.273864   47783 type.go:168] "Request Body" body=""
	I1213 08:50:01.273935   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:01.274197   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:01.274238   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
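Every failed iteration above ends in dial tcp 192.168.49.2:8441: connect: connection refused, meaning nothing is listening on the apiserver port yet. In Go that condition can be told apart from other network failures with errors.Is against syscall.ECONNREFUSED; the sketch below shows the idea under that assumption (the /healthz path and the bare http.Client are illustrative only; the real client also carries minikube's TLS client certificates).

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"net/http"
    	"syscall"
    	"time"
    )

    // probeAPIServer distinguishes "apiserver not listening yet" (connection
    // refused, retryable) from other failures, which is the distinction the
    // W-level warnings in the log above are making.
    func probeAPIServer(url string) {
    	client := &http.Client{Timeout: 2 * time.Second}
    	_, err := client.Get(url)
    	var opErr *net.OpError
    	switch {
    	case err == nil:
    		fmt.Println("apiserver is answering")
    	case errors.Is(err, syscall.ECONNREFUSED):
    		fmt.Println("connection refused: nothing listening yet, retry")
    	case errors.As(err, &opErr):
    		fmt.Println("other network error:", opErr)
    	default:
    		fmt.Println("request failed:", err)
    	}
    }

    func main() {
    	probeAPIServer("https://192.168.49.2:8441/healthz")
    }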
	I1213 08:50:01.773963   47783 type.go:168] "Request Body" body=""
	I1213 08:50:01.774042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:01.774389   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:02.273108   47783 type.go:168] "Request Body" body=""
	I1213 08:50:02.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:02.273568   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:02.773279   47783 type.go:168] "Request Body" body=""
	I1213 08:50:02.773363   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:02.773686   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:03.273398   47783 type.go:168] "Request Body" body=""
	I1213 08:50:03.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:03.273787   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:03.773099   47783 type.go:168] "Request Body" body=""
	I1213 08:50:03.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:03.773516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:03.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:04.273225   47783 type.go:168] "Request Body" body=""
	I1213 08:50:04.273313   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:04.273579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:04.773096   47783 type.go:168] "Request Body" body=""
	I1213 08:50:04.773170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:04.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:05.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:50:05.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:05.273466   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:05.773045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:05.773131   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:05.773429   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:06.273084   47783 type.go:168] "Request Body" body=""
	I1213 08:50:06.273161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:06.273495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:06.273553   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:06.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:50:06.773112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:06.773416   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:07.272971   47783 type.go:168] "Request Body" body=""
	I1213 08:50:07.273045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:07.273298   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:07.773048   47783 type.go:168] "Request Body" body=""
	I1213 08:50:07.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:07.773427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:50:08.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:08.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:08.773055   47783 type.go:168] "Request Body" body=""
	I1213 08:50:08.773142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:08.773458   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:08.773523   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:09.273174   47783 type.go:168] "Request Body" body=""
	I1213 08:50:09.273248   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:09.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:09.773553   47783 type.go:168] "Request Body" body=""
	I1213 08:50:09.773632   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:09.773992   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:10.273723   47783 type.go:168] "Request Body" body=""
	I1213 08:50:10.273814   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:10.274202   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:10.773083   47783 type.go:168] "Request Body" body=""
	I1213 08:50:10.773168   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:10.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:10.773551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:11.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:50:11.273271   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:11.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:11.772991   47783 type.go:168] "Request Body" body=""
	I1213 08:50:11.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:11.773355   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:12.273063   47783 type.go:168] "Request Body" body=""
	I1213 08:50:12.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:12.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:12.773181   47783 type.go:168] "Request Body" body=""
	I1213 08:50:12.773261   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:12.773608   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:12.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:13.273258   47783 type.go:168] "Request Body" body=""
	I1213 08:50:13.273334   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:13.273612   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:13.773147   47783 type.go:168] "Request Body" body=""
	I1213 08:50:13.773220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:13.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:14.273102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:14.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:14.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:14.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:50:14.773121   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:14.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:15.273165   47783 type.go:168] "Request Body" body=""
	I1213 08:50:15.273259   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:15.273603   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:15.273662   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:15.773104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:15.773182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:15.773492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:16.273005   47783 type.go:168] "Request Body" body=""
	I1213 08:50:16.273080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:16.273324   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:16.773018   47783 type.go:168] "Request Body" body=""
	I1213 08:50:16.773092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:16.773427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:17.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:17.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:17.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:17.773161   47783 type.go:168] "Request Body" body=""
	I1213 08:50:17.773232   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:17.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:17.773616   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
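The Accept header repeated on every request above (application/vnd.kubernetes.protobuf,application/json) is client-go's protobuf-first content negotiation: protobuf is preferred, JSON is the fallback for types without a protobuf encoding. A sketch of how a rest.Config opts into it; the Host value is copied from the log, and the snippet only builds a clientset without contacting the server.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newClient enables protobuf-first negotiation, matching the Accept
    // header recorded on every request in the log above.
    func newClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
    	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
    	cfg.ContentType = "application/vnd.kubernetes.protobuf"
    	return kubernetes.NewForConfig(cfg)
    }

    func main() {
    	cfg := &rest.Config{Host: "https://192.168.49.2:8441"}
    	cs, err := newClient(cfg)
    	fmt.Println(cs != nil, err)
    }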
	I1213 08:50:18.273100   47783 type.go:168] "Request Body" body=""
	I1213 08:50:18.273173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:18.273458   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:18.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:50:18.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:18.773525   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:19.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:50:19.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:19.273451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:19.773113   47783 type.go:168] "Request Body" body=""
	I1213 08:50:19.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:19.773533   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:20.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:50:20.273111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:20.273432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:20.273484   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:20.773059   47783 type.go:168] "Request Body" body=""
	I1213 08:50:20.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:20.773472   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:21.273123   47783 type.go:168] "Request Body" body=""
	I1213 08:50:21.273195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:21.273492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:21.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:50:21.773182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:21.773514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:22.273055   47783 type.go:168] "Request Body" body=""
	I1213 08:50:22.273123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:22.273427   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:22.773171   47783 type.go:168] "Request Body" body=""
	I1213 08:50:22.773255   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:22.773587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:22.773638   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:23.273116   47783 type.go:168] "Request Body" body=""
	I1213 08:50:23.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:23.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:23.773176   47783 type.go:168] "Request Body" body=""
	I1213 08:50:23.773242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:23.773565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:24.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:50:24.273208   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:24.273526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:24.773266   47783 type.go:168] "Request Body" body=""
	I1213 08:50:24.773335   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:24.773645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:24.773691   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:25.273185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.273256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.273518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:25.773486   47783 type.go:168] "Request Body" body=""
	I1213 08:50:25.773580   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:25.773863   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.273638   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.273722   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.274046   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:26.773772   47783 type.go:168] "Request Body" body=""
	I1213 08:50:26.773849   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:26.774110   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:26.774158   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:27.273949   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.274035   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:27.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:50:27.773157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:27.773492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.273060   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.273130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:28.773138   47783 type.go:168] "Request Body" body=""
	I1213 08:50:28.773213   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:28.773534   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:29.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.273332   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.273667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:29.273721   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:29.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:50:29.773492   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:29.773757   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.273528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:30.773268   47783 type.go:168] "Request Body" body=""
	I1213 08:50:30.773342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:30.773681   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.273131   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:31.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:50:31.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:31.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:31.773554   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:32.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.273254   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.273550   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:32.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:50:32.773114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:32.773420   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:33.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:50:33.773311   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:33.773638   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:33.773693   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:34.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.273450   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:34.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:34.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:34.773483   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.273285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.273659   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:35.773467   47783 type.go:168] "Request Body" body=""
	I1213 08:50:35.773539   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:35.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:35.773847   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
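The line shape itself, such as I1213 08:50:35.773467 47783 round_trippers.go:527] "Request" verb="GET" ... with multi-line header values wrapped between '<' and '>', is klog/v2 structured logging. A small sketch that emits the same format; the header string here is a stand-in for illustration, not captured output.

    package main

    import (
    	"flag"

    	"k8s.io/klog/v2"
    )

    // klog.InfoS emits the "I<MMDD> HH:MM:SS.micros PID file:line] \"msg\"
    // key=value" format seen throughout this log; string values containing
    // newlines are rendered as a block between '<' and '>'.
    func main() {
    	klog.InitFlags(nil)
    	flag.Parse()
    	defer klog.Flush()

    	klog.InfoS("Request", "verb", "GET",
    		"url", "https://192.168.49.2:8441/api/v1/nodes/functional-074420",
    		"headers", "Accept: application/vnd.kubernetes.protobuf,application/json\nUser-Agent: minikube-linux-arm64")
    	klog.InfoS("Response", "status", "", "milliseconds", 0)
    }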
	I1213 08:50:36.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.273501   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:36.773198   47783 type.go:168] "Request Body" body=""
	I1213 08:50:36.773272   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:36.773631   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.273322   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.273649   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:37.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:37.773139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:38.273158   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.273235   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.273563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:38.273617   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:38.773952   47783 type.go:168] "Request Body" body=""
	I1213 08:50:38.774033   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:38.774277   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.272962   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.273042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.273376   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:39.773154   47783 type.go:168] "Request Body" body=""
	I1213 08:50:39.773228   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:39.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.273301   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.273580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:40.773572   47783 type.go:168] "Request Body" body=""
	I1213 08:50:40.773643   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:40.773972   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:40.774022   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:41.273745   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.273822   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.274145   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:41.773824   47783 type.go:168] "Request Body" body=""
	I1213 08:50:41.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:41.774153   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.273992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.274071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.274419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:42.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:50:42.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:42.773457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:43.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:43.273477   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:43.773102   47783 type.go:168] "Request Body" body=""
	I1213 08:50:43.773177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:43.773521   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.273220   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.273315   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.273660   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:44.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:50:44.773097   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:44.773359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:45.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.273133   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:45.273530   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:45.773107   47783 type.go:168] "Request Body" body=""
	I1213 08:50:45.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:45.773544   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.273227   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.273297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.273559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:46.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:50:46.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:46.773469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:47.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.273574   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:47.273628   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:47.773031   47783 type.go:168] "Request Body" body=""
	I1213 08:50:47.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:47.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.273067   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:48.773210   47783 type.go:168] "Request Body" body=""
	I1213 08:50:48.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:48.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.272997   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.273065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.273322   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:49.773187   47783 type.go:168] "Request Body" body=""
	I1213 08:50:49.773256   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:49.773595   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:49.773649   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:50.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.273391   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.273716   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:50.773448   47783 type.go:168] "Request Body" body=""
	I1213 08:50:50.773521   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:50.773785   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.273153   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:51.773185   47783 type.go:168] "Request Body" body=""
	I1213 08:50:51.773266   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:51.773606   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:52.273034   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.273399   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:52.273451   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:52.773078   47783 type.go:168] "Request Body" body=""
	I1213 08:50:52.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:52.773490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.273950   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.274337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:53.772992   47783 type.go:168] "Request Body" body=""
	I1213 08:50:53.773065   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:53.773378   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.272967   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.273044   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.273396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:54.773129   47783 type.go:168] "Request Body" body=""
	I1213 08:50:54.773203   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:54.773528   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:54.773588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:55.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:55.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:50:55.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:55.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.273075   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:56.772978   47783 type.go:168] "Request Body" body=""
	I1213 08:50:56.773045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:56.773290   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:57.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:57.273428   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:57.773002   47783 type.go:168] "Request Body" body=""
	I1213 08:50:57.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:57.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.273022   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.273391   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:58.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:50:58.773196   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:58.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:50:59.273246   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.273324   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.273653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:59.273705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:59.773429   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.773497   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.773750   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.273609   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.774124   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:01.273906   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.273981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.274248   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:01.274298   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:01.772982   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.773396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.273510   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.773051   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.773122   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.773441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.773197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:03.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:04.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:04.773150   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.773225   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.773542   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.273267   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.273342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.273694   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.773534   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.773600   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.773861   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:05.773901   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:06.273627   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.273698   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.273995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:06.773786   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.773858   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.774165   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.273877   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.273959   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.274221   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.774080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.774408   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:07.774461   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.273511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:08.773065   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.273148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.773600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:10.273256   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.273325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:10.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:10.773469   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.773540   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.773888   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.273777   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.274082   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.773860   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.773926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:12.273909   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.273991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.274305   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:12.274363   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:12.773010   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.773086   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.774032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.774103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.774412   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.773021   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.773090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.773394   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:14.773446   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:15.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:15.773082   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.273248   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.273492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.773512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:16.773564   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:17.273224   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.273307   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:17.773254   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.773319   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.273315   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.273399   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.273714   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.773100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.773556   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:18.773613   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:19.273057   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.273134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.273422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:19.773326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.773408   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.773817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.273652   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.273726   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.274023   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.773981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.774242   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:20.774291   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:21.272987   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.273071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.273418   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:21.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.773224   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.273385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:23.273121   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.273201   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.273516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:23.273571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:23.773217   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.773286   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.273100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.773255   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.773333   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.773667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:25.273326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.273641   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:25.273680   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:25.773681   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.773757   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.774066   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.273874   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.273947   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.274273   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.772979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.773047   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.773307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.273109   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.273206   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.273597   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.773321   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.773394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.773719   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:27.773778   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:28.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.273092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:28.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.773172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.273187   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.273261   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.273583   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.773457   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.773543   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.773812   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:29.773863   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:30.273575   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.273663   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.273957   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:30.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.773991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.774329   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.273977   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.274058   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.773024   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:32.273147   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.273218   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:32.273546   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:32.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.273171   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.773083   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.273124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.273384   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.773086   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.773216   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.773545   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:34.773602   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:35.273264   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.273339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.273673   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:35.773675   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.773742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.273812   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.273886   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.274208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.773984   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.774056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.774351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:36.774398   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:37.273032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.273364   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:37.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.773146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.773499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.273147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.273480   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.773150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:39.273066   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.273476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:39.273529   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:39.773199   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.773278   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.773580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.273030   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.273392   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.773204   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.773647   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:41.273380   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.273453   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.273801   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:41.273854   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:41.773592   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.773662   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.773929   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.273710   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.273789   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.274146   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.773753   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.774140   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:43.273914   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.273992   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.274262   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:43.274314   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:43.774012   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.774089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.774435   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.773145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.773485   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.273088   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.273517   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:45.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:46.273137   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.273200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.273447   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:46.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.773198   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.273291   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.273588   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.773395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:48.273117   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:48.273569   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:48.773253   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.773339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.773688   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.273385   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.273462   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.273726   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.773610   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.773679   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:50.273790   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.273866   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.274187   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:50.274243   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:50.773061   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.773136   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.773466   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.273471   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.773060   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.273335   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.773020   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:52.773493   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:53.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.273277   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.273626   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:53.773306   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.773373   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.273587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.773074   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:54.773556   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:55.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.273270   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.273520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:55.773075   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.773149   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.773705   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.273383   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.772963   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.773031   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.773288   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:57.272979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:57.273481   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:57.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.773193   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.773526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.273108   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.773188   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.773503   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:59.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.273220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:59.273585   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:59.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.273530   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.273627   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.274021   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.773522   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.273071   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.773057   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:01.773527   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:02.273229   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.273310   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.273637   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:02.773212   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.773280   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.273115   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.273565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.773352   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.773695   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:03.773772   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:04.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.273090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.273426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:04.773087   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.273162   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.273242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.273562   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.773515   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.773585   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.773843   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:05.773884   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:06.273680   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.273755   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.274063   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:06.773854   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.773929   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.774259   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.274012   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.274080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.274341   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.773425   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:08.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.273531   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:08.273588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:08.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.773151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.273541   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.773271   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.773343   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.773683   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:10.273235   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.273308   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.273623   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:10.273686   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:10.773580   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.773661   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.773990   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.273663   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.274065   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.773833   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:12.273942   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.274016   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.274348   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:12.274403   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:12.774024   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.774098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.774431   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.273324   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.273404   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.273738   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.773491   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.773809   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:14.773870   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:15.273604   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.273678   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.273970   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:15.773929   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.774005   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.273966   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.274328   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.773036   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.773432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:17.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:17.273639   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:17.772950   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.773025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.773278   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.274027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.274101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.773356   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:19.773818   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:20.273485   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.273567   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.273890   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:20.773865   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.773932   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.774231   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.273999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.274069   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.274395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.772998   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.773076   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.773386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:22.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.273101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.273413   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:22.273462   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:22.773146   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.773221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.273239   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.273317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.273630   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.773089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.773346   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:24.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:24.273551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:24.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.273040   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.773304   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:26.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.273187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.273523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:26.273583   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:26.773246   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.773317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.773577   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.273252   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.273645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.773350   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.773425   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.773742   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.273499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.773080   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:28.773541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:29.273204   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.273624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:29.773318   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.773387   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.773650   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.273487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.773484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:31.273046   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.273373   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:31.273414   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:31.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.773520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.773087   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.773337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:33.273004   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.273082   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:33.273472   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 request/response pairs repeat every ~500ms from 08:52:33.773 through 08:53:33.773, each failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go "will retry" warning recurs roughly every two seconds ...]
	I1213 08:53:34.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:53:34.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:34.273432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:34.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:53:34.773216   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:34.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:34.773545   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:35.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:53:35.273162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:35.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:35.773024   47783 type.go:168] "Request Body" body=""
	I1213 08:53:35.773105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:35.773404   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:36.273023   47783 type.go:168] "Request Body" body=""
	I1213 08:53:36.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:36.273475   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:36.773061   47783 type.go:168] "Request Body" body=""
	I1213 08:53:36.773145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:36.773477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:37.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:37.273135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:37.273382   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:37.273421   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:37.773097   47783 type.go:168] "Request Body" body=""
	I1213 08:53:37.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:37.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:38.273189   47783 type.go:168] "Request Body" body=""
	I1213 08:53:38.273267   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:38.273579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:38.773028   47783 type.go:168] "Request Body" body=""
	I1213 08:53:38.773102   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:38.773347   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:39.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:53:39.273143   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:39.273470   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:39.273519   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:39.773215   47783 type.go:168] "Request Body" body=""
	I1213 08:53:39.773288   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:39.773599   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:40.273048   47783 type.go:168] "Request Body" body=""
	I1213 08:53:40.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:40.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:40.773077   47783 type.go:168] "Request Body" body=""
	I1213 08:53:40.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:40.773508   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:41.273216   47783 type.go:168] "Request Body" body=""
	I1213 08:53:41.273298   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:41.273616   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:41.273672   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:41.772986   47783 type.go:168] "Request Body" body=""
	I1213 08:53:41.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:41.773356   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:42.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.273143   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:42.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:42.773184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:42.773518   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.273053   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.273132   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.273442   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:43.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:43.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:43.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:43.773551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:44.273243   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.273652   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:44.773050   47783 type.go:168] "Request Body" body=""
	I1213 08:53:44.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:44.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.273208   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.273289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.273720   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:45.773581   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.773651   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.773963   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:45.774017   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:46.275629   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.275703   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.275961   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:46.773754   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.774161   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.273968   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.274351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.773096   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.773357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:48.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.273530   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:48.273587   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:48.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.773183   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.773523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.273149   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.773819   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:50.273603   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.273680   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.273953   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:50.273999   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:50.773802   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.773880   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.774158   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.273956   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.274317   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.772988   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.773063   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.773397   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.773515   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:52.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:53.273241   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.273661   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:53.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.773123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.273507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.773088   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:55.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.273215   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.273569   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:55.273618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:55.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.773491   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.773063   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.773385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.773297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.773605   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:57.773654   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:58.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.273402   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.273675   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:58.773360   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.773734   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.273441   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.273519   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.273831   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.773701   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:59.773764   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:54:00.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.273367   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:54:00.273821   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:54:00.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.773932   47783 node_ready.go:38] duration metric: took 6m0.00107019s for node "functional-074420" to be "Ready" ...
	I1213 08:54:00.777004   47783 out.go:203] 
	W1213 08:54:00.779921   47783 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 08:54:00.779957   47783 out.go:285] * 
	W1213 08:54:00.782360   47783 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:54:00.785205   47783 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:08 functional-074420 containerd[5215]: time="2025-12-13T08:54:08.698359845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.720401420Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.722569669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.730290778Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.730754570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.650163915Z" level=info msg="No images store for sha256:4895c2d9428a4414f50a4570be6c07ab95cad42d4dfd499b34f79030b39f2e5b"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.652326470Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-074420\""
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.659849422Z" level=info msg="ImageCreate event name:\"sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.660140979Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.442170817Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.444571119Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.446791299Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.459554762Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.543620150Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.545804556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.555374304Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.556065389Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.577850819Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.580162225Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.582226498Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.590715099Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.714489707Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.716753137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.725375369Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.725818468Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:54:14.469503    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:14.470119    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:14.471808    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:14.472219    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:14.473423    9203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 08:54:14 up 36 min,  0 user,  load average: 0.49, 0.34, 0.50
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:54:11 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:11 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 13 08:54:11 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:11 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:11 functional-074420 kubelet[8974]: E1213 08:54:11.828537    8974 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:11 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:11 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:12 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 13 08:54:12 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:12 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:12 functional-074420 kubelet[9049]: E1213 08:54:12.548502    9049 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:12 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:12 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:13 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 13 08:54:13 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:13 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:13 functional-074420 kubelet[9098]: E1213 08:54:13.350472    9098 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:13 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:13 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 13 08:54:14 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 kubelet[9118]: E1213 08:54:14.081057    9118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
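The kubelet section above is the root cause of every connection-refused error in this run: kubelet exits during startup validation ("kubelet is configured to not run on a host using cgroup v1"), systemd has restarted it 826 times, so the static control-plane pods (including the apiserver on 8441) never come up. A minimal diagnostic sketch, assuming shell access to the CI host and the profile container name used in this run:

	# cgroup2fs means cgroup v2, tmpfs means cgroup v1 (the failing case here)
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the cgroup version it is driving containers with (1 or 2)
	docker info --format '{{.CgroupVersion}}'
	# The kic node container inherits the host's cgroup setup, so the same check applies inside it
	docker exec functional-074420 stat -fc %T /sys/fs/cgroup/

The kernel section shows a 5.15.0-1084-aws Ubuntu 20.04 host; 20.04 boots with cgroup v1 by default, which is consistent with the kubelet validation failure.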
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (361.595182ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.29s)
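The helpers poll single fields of minikube's status struct through Go templates ({{.APIServer}} here, {{.Host}} further below). For manual triage the fields can be combined in one call; a sketch, assuming the same Status fields the harness uses plus the documented Kubelet field:

	out/minikube-linux-arm64 status -p functional-074420 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'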

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-074420 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-074420 get pods: exit status 1 (112.406574ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-074420 get pods": exit status 1
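A quick way to separate "apiserver down" from "wrong endpoint" is to probe the port directly. A hedged sketch using the endpoints recorded in this run (the 127.0.0.1:32791 mapping for 8441/tcp appears in the docker inspect output below); even a 401/403 response would prove the socket is up, whereas connection refused matches the kubelet crash-loop noted above:

	# Against the node IP the kubeconfig points at
	curl -sk https://192.168.49.2:8441/healthz
	# Against the host-published port for 8441/tcp
	curl -sk https://127.0.0.1:32791/healthz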
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
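When only the port mappings matter, the full inspect dump can be skipped; both of these are standard docker CLI calls and print the 127.0.0.1:32791 mapping shown above:

	docker port functional-074420 8441/tcp
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-074420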
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (302.519173ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-049633 ssh findmnt -T /mount3                                                                                                                │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount   │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image   │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete  │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start   │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start   │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:latest                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add minikube-local-cache-test:functional-074420                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache delete minikube-local-cache-test:functional-074420                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl images                                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ cache   │ functional-074420 cache reload                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ kubectl │ functional-074420 kubectl -- --context functional-074420 get pods                                                                                       │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:47:55
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:47:55.372522   47783 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:47:55.372733   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.372759   47783 out.go:374] Setting ErrFile to fd 2...
	I1213 08:47:55.372779   47783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:47:55.373071   47783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:47:55.373500   47783 out.go:368] Setting JSON to false
	I1213 08:47:55.374339   47783 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1828,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:47:55.374435   47783 start.go:143] virtualization:  
	I1213 08:47:55.378014   47783 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:47:55.381059   47783 notify.go:221] Checking for updates...
	I1213 08:47:55.381456   47783 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:47:55.384645   47783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:47:55.387475   47783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:55.390285   47783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:47:55.393179   47783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:47:55.396170   47783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:47:55.399625   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:55.399723   47783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:47:55.421152   47783 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:47:55.421278   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.479286   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.469949512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.479410   47783 docker.go:319] overlay module found
	I1213 08:47:55.482469   47783 out.go:179] * Using the docker driver based on existing profile
	I1213 08:47:55.485237   47783 start.go:309] selected driver: docker
	I1213 08:47:55.485259   47783 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.485359   47783 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:47:55.485469   47783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:47:55.552137   47783 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 08:47:55.542465837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:47:55.552549   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:55.552614   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:55.552664   47783 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:55.555904   47783 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:47:55.558801   47783 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:47:55.561846   47783 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:47:55.564866   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:55.564922   47783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:47:55.564938   47783 cache.go:65] Caching tarball of preloaded images
	I1213 08:47:55.564963   47783 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:47:55.565027   47783 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:47:55.565039   47783 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:47:55.565188   47783 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:47:55.585020   47783 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:47:55.585044   47783 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:47:55.585064   47783 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:47:55.585094   47783 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:47:55.585169   47783 start.go:364] duration metric: took 45.161µs to acquireMachinesLock for "functional-074420"
	I1213 08:47:55.585195   47783 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:47:55.585204   47783 fix.go:54] fixHost starting: 
	I1213 08:47:55.585456   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:55.601925   47783 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:47:55.601956   47783 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:47:55.605110   47783 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:47:55.605143   47783 machine.go:94] provisionDockerMachine start ...
	I1213 08:47:55.605228   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.622184   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.622521   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.622536   47783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:47:55.770899   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.770923   47783 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:47:55.770990   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.788917   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.789224   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.789243   47783 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:47:55.944141   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:47:55.944216   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:55.963276   47783 main.go:143] libmachine: Using SSH client type: native
	I1213 08:47:55.963669   47783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:47:55.963693   47783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
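The SSH script above is minikube's idempotent hostname pin: if no /etc/hosts line already ends with the machine name, the 127.0.1.1 entry is rewritten in place, otherwise one is appended. A sketch of the same rewrite in Go (the file path and seed contents are illustrative; the real target is /etc/hosts inside the container):

	// hosts_pin.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func pinHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		// Mirrors: grep -xq '.*\sfunctional-074420' /etc/hosts
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(content) {
			return nil // already pinned
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(content) {
			content = loop.ReplaceAllString(content, "127.0.1.1 "+name)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		path := "/tmp/hosts-example" // illustrative path
		_ = os.WriteFile(path, []byte("127.0.0.1 localhost\n127.0.1.1 old-name\n"), 0644)
		if err := pinHostname(path, "functional-074420"); err != nil {
			fmt.Println(err)
		}
	}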
	I1213 08:47:56.123813   47783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:47:56.123839   47783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:47:56.123865   47783 ubuntu.go:190] setting up certificates
	I1213 08:47:56.123875   47783 provision.go:84] configureAuth start
	I1213 08:47:56.123935   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.141934   47783 provision.go:143] copyHostCerts
	I1213 08:47:56.141983   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142030   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:47:56.142044   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:47:56.142121   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:47:56.142216   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142238   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:47:56.142247   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:47:56.142276   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:47:56.142329   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142361   47783 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:47:56.142370   47783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:47:56.142397   47783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:47:56.142457   47783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:47:56.320875   47783 provision.go:177] copyRemoteCerts
	I1213 08:47:56.320949   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:47:56.320994   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.338054   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.442993   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 08:47:56.443052   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:47:56.459467   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 08:47:56.459650   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:47:56.476836   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 08:47:56.476894   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 08:47:56.494408   47783 provision.go:87] duration metric: took 370.509157ms to configureAuth
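configureAuth above regenerated the machine server certificate with the SAN set shown in the provision.go line (127.0.0.1, 192.168.49.2, functional-074420, localhost, minikube) and pushed the CA and server key pair to /etc/docker over SSH. One detail worth making explicit: an x509 certificate template wants that mixed SAN list split into IP and DNS entries, roughly like this sketch (not minikube's code; the list is copied from the log):

	// san_split.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		sans := []string{"127.0.0.1", "192.168.49.2", "functional-074420", "localhost", "minikube"}
		var ips []net.IP
		var dnsNames []string
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				ips = append(ips, ip)
			} else {
				dnsNames = append(dnsNames, s)
			}
		}
		fmt.Println("x509 IPAddresses:", ips)
		fmt.Println("x509 DNSNames:   ", dnsNames)
	}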
	I1213 08:47:56.494435   47783 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:47:56.494611   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:56.494624   47783 machine.go:97] duration metric: took 889.474725ms to provisionDockerMachine
	I1213 08:47:56.494633   47783 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:47:56.494644   47783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:47:56.494700   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:47:56.494748   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.511710   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.615158   47783 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:47:56.618357   47783 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 08:47:56.618378   47783 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 08:47:56.618383   47783 command_runner.go:130] > VERSION_ID="12"
	I1213 08:47:56.618388   47783 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 08:47:56.618392   47783 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 08:47:56.618422   47783 command_runner.go:130] > ID=debian
	I1213 08:47:56.618436   47783 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 08:47:56.618441   47783 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 08:47:56.618448   47783 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 08:47:56.618517   47783 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:47:56.618537   47783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:47:56.618550   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:47:56.618607   47783 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:47:56.618691   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:47:56.618702   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /etc/ssl/certs/41202.pem
	I1213 08:47:56.618783   47783 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:47:56.618792   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> /etc/test/nested/copy/4120/hosts
	I1213 08:47:56.618842   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:47:56.626162   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:56.643608   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:47:56.661460   47783 start.go:296] duration metric: took 166.811201ms for postStartSetup
	I1213 08:47:56.661553   47783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:47:56.661603   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.678627   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.785005   47783 command_runner.go:130] > 14%
	I1213 08:47:56.785418   47783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:47:56.789762   47783 command_runner.go:130] > 169G
	I1213 08:47:56.790146   47783 fix.go:56] duration metric: took 1.204938515s for fixHost
	I1213 08:47:56.790168   47783 start.go:83] releasing machines lock for "functional-074420", held for 1.204983079s
	I1213 08:47:56.790231   47783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:47:56.811813   47783 ssh_runner.go:195] Run: cat /version.json
	I1213 08:47:56.811877   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.812180   47783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:47:56.812227   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:56.839131   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:56.843453   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:57.035647   47783 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 08:47:57.038511   47783 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 08:47:57.038690   47783 ssh_runner.go:195] Run: systemctl --version
	I1213 08:47:57.044708   47783 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 08:47:57.044761   47783 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 08:47:57.045134   47783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 08:47:57.049401   47783 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 08:47:57.049443   47783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:47:57.049503   47783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:47:57.057127   47783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
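The find/mv pipeline above is how conflicting CNI configs get neutralized: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix so it cannot shadow the kindnet CNI selected earlier; here nothing matched, so there was nothing to disable. A hedged Go sketch of that rename (the directory is illustrative; minikube runs the equivalent over SSH):

	// cni_disable.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableConflictingCNI(dir string) error {
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous start
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", m)
			}
		}
		return nil
	}

	func main() {
		if err := disableConflictingCNI("/tmp/cni-example"); err != nil {
			fmt.Println(err)
		}
	}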
	I1213 08:47:57.057158   47783 start.go:496] detecting cgroup driver to use...
	I1213 08:47:57.057211   47783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:47:57.057279   47783 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:47:57.072743   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:47:57.086014   47783 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:47:57.086118   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:47:57.102029   47783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:47:57.115088   47783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:47:57.226726   47783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:47:57.347870   47783 docker.go:234] disabling docker service ...
	I1213 08:47:57.347940   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:47:57.363202   47783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:47:57.377010   47783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:47:57.506500   47783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:47:57.649131   47783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:47:57.662497   47783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:47:57.677018   47783 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1213 08:47:57.678207   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:47:57.688555   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:47:57.698272   47783 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:47:57.698370   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:47:57.707500   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.716692   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:47:57.725739   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:47:57.734886   47783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:47:57.743485   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:47:57.753073   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:47:57.761993   47783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:47:57.770719   47783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:47:57.777695   47783 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 08:47:57.778683   47783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:47:57.786237   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:57.908393   47783 ssh_runner.go:195] Run: sudo systemctl restart containerd
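The preceding run of sed commands rewrites /etc/containerd/config.toml in place before this restart: it pins sandbox_image to registry.k8s.io/pause:3.10.1, forces the io.containerd.runc.v2 runtime, points conf_dir at /etc/cni/net.d, re-enables unprivileged ports, and, because the host was detected as using the cgroupfs driver, forces SystemdCgroup = false. A self-contained sketch of that last rewrite, mirroring the sed expression:

	// cgroup_rewrite.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}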
	I1213 08:47:58.046253   47783 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:47:58.046368   47783 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:47:58.050493   47783 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1213 08:47:58.050558   47783 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 08:47:58.050578   47783 command_runner.go:130] > Device: 0,73	Inode: 1610        Links: 1
	I1213 08:47:58.050603   47783 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:58.050636   47783 command_runner.go:130] > Access: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050663   47783 command_runner.go:130] > Modify: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050685   47783 command_runner.go:130] > Change: 2025-12-13 08:47:57.999453217 +0000
	I1213 08:47:58.050720   47783 command_runner.go:130] >  Birth: -
	I1213 08:47:58.050927   47783 start.go:564] Will wait 60s for crictl version
	I1213 08:47:58.051002   47783 ssh_runner.go:195] Run: which crictl
	I1213 08:47:58.054661   47783 command_runner.go:130] > /usr/local/bin/crictl
	I1213 08:47:58.054852   47783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:47:58.077876   47783 command_runner.go:130] > Version:  0.1.0
	I1213 08:47:58.077939   47783 command_runner.go:130] > RuntimeName:  containerd
	I1213 08:47:58.077961   47783 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1213 08:47:58.077985   47783 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 08:47:58.080051   47783 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
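Both waits above are bounded polls: up to 60s for the containerd socket to appear and up to 60s for crictl to answer a version query. A sketch of the pattern (the 500ms poll interval is an assumption, not taken from the log):

	// runtime_wait.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitFor polls probe until it succeeds or the deadline elapses.
	func waitFor(deadline time.Duration, probe func() bool) bool {
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			if probe() {
				return true
			}
			time.Sleep(500 * time.Millisecond) // interval is an assumption
		}
		return false
	}

	func main() {
		sockOK := waitFor(60*time.Second, func() bool {
			_, err := os.Stat("/run/containerd/containerd.sock")
			return err == nil
		})
		crictlOK := waitFor(60*time.Second, func() bool {
			return exec.Command("crictl", "version").Run() == nil
		})
		fmt.Println("socket ready:", sockOK, "crictl ready:", crictlOK)
	}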
	I1213 08:47:58.080159   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.100302   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.101953   47783 ssh_runner.go:195] Run: containerd --version
	I1213 08:47:58.119235   47783 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1213 08:47:58.126521   47783 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:47:58.129463   47783 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:47:58.145273   47783 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:47:58.149369   47783 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 08:47:58.149453   47783 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:47:58.149580   47783 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:47:58.149657   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.174191   47783 command_runner.go:130] > {
	I1213 08:47:58.174214   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.174219   47783 command_runner.go:130] >     {
	I1213 08:47:58.174232   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.174237   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174242   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.174246   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174250   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174259   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.174263   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174267   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.174271   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174275   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174278   47783 command_runner.go:130] >     },
	I1213 08:47:58.174281   47783 command_runner.go:130] >     {
	I1213 08:47:58.174289   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.174299   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174305   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.174308   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174313   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174321   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.174328   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174332   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.174336   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174340   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174343   47783 command_runner.go:130] >     },
	I1213 08:47:58.174349   47783 command_runner.go:130] >     {
	I1213 08:47:58.174356   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.174361   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174366   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.174371   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174384   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174395   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.174399   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174403   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.174409   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.174417   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174421   47783 command_runner.go:130] >     },
	I1213 08:47:58.174424   47783 command_runner.go:130] >     {
	I1213 08:47:58.174430   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.174436   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174441   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.174444   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174449   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174458   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.174464   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174468   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.174472   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174475   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174478   47783 command_runner.go:130] >       },
	I1213 08:47:58.174487   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174491   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174494   47783 command_runner.go:130] >     },
	I1213 08:47:58.174497   47783 command_runner.go:130] >     {
	I1213 08:47:58.174507   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.174511   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174518   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.174522   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174526   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174533   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.174539   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174545   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.174551   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174559   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174562   47783 command_runner.go:130] >       },
	I1213 08:47:58.174566   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174576   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174580   47783 command_runner.go:130] >     },
	I1213 08:47:58.174584   47783 command_runner.go:130] >     {
	I1213 08:47:58.174594   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.174601   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174607   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.174610   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174614   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174625   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.174631   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174635   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.174638   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174642   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174645   47783 command_runner.go:130] >       },
	I1213 08:47:58.174649   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174655   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174659   47783 command_runner.go:130] >     },
	I1213 08:47:58.174663   47783 command_runner.go:130] >     {
	I1213 08:47:58.174671   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.174677   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174681   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.174684   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174688   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174699   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.174704   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174709   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.174713   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174716   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174721   47783 command_runner.go:130] >     },
	I1213 08:47:58.174725   47783 command_runner.go:130] >     {
	I1213 08:47:58.174732   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.174742   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174747   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.174753   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174757   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174765   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.174774   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174781   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.174784   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174788   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.174797   47783 command_runner.go:130] >       },
	I1213 08:47:58.174802   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174808   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.174811   47783 command_runner.go:130] >     },
	I1213 08:47:58.174814   47783 command_runner.go:130] >     {
	I1213 08:47:58.174821   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.174828   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.174833   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.174836   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174840   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.174848   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.174851   47783 command_runner.go:130] >       ],
	I1213 08:47:58.174855   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.174860   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.174864   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.174875   47783 command_runner.go:130] >       },
	I1213 08:47:58.174880   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.174884   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.174887   47783 command_runner.go:130] >     }
	I1213 08:47:58.174890   47783 command_runner.go:130] >   ]
	I1213 08:47:58.174893   47783 command_runner.go:130] > }
	I1213 08:47:58.175043   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.175056   47783 containerd.go:534] Images already preloaded, skipping extraction
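Editor's note: the "all images are preloaded" decision above comes from running sudo crictl images --output json and checking the returned image list against the tags the cluster needs. A minimal sketch of that comparison in Go, assuming hypothetical names (crictlImage, preloaded) that are not taken from the minikube source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors only the fields of `crictl images --output json`
// that the check below uses.
type crictlImage struct {
	RepoTags []string `json:"repoTags"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

// preloaded reports whether every required tag appears in crictl's image list.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"})
	fmt.Println(ok, err)
}

When every tag is found, extraction of the preload tarball can be skipped, which is exactly what containerd.go:534 logs next.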
	I1213 08:47:58.175117   47783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:47:58.196592   47783 command_runner.go:130] > {
	I1213 08:47:58.196612   47783 command_runner.go:130] >   "images":  [
	I1213 08:47:58.196616   47783 command_runner.go:130] >     {
	I1213 08:47:58.196626   47783 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 08:47:58.196631   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196637   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 08:47:58.196641   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196644   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196654   47783 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1213 08:47:58.196660   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196664   47783 command_runner.go:130] >       "size":  "40636774",
	I1213 08:47:58.196674   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196678   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196682   47783 command_runner.go:130] >     },
	I1213 08:47:58.196685   47783 command_runner.go:130] >     {
	I1213 08:47:58.196701   47783 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 08:47:58.196710   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196715   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 08:47:58.196719   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196723   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196732   47783 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 08:47:58.196739   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196745   47783 command_runner.go:130] >       "size":  "8034419",
	I1213 08:47:58.196753   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196757   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196764   47783 command_runner.go:130] >     },
	I1213 08:47:58.196768   47783 command_runner.go:130] >     {
	I1213 08:47:58.196782   47783 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 08:47:58.196787   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196793   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 08:47:58.196798   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196807   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196825   47783 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1213 08:47:58.196833   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196838   47783 command_runner.go:130] >       "size":  "21168808",
	I1213 08:47:58.196847   47783 command_runner.go:130] >       "username":  "nonroot",
	I1213 08:47:58.196852   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196861   47783 command_runner.go:130] >     },
	I1213 08:47:58.196864   47783 command_runner.go:130] >     {
	I1213 08:47:58.196871   47783 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 08:47:58.196875   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196880   47783 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 08:47:58.196884   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196888   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196897   47783 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1213 08:47:58.196904   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196908   47783 command_runner.go:130] >       "size":  "21136588",
	I1213 08:47:58.196912   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.196916   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.196924   47783 command_runner.go:130] >       },
	I1213 08:47:58.196929   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.196936   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.196940   47783 command_runner.go:130] >     },
	I1213 08:47:58.196943   47783 command_runner.go:130] >     {
	I1213 08:47:58.196953   47783 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 08:47:58.196958   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.196963   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 08:47:58.196968   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196973   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.196984   47783 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1213 08:47:58.196993   47783 command_runner.go:130] >       ],
	I1213 08:47:58.196998   47783 command_runner.go:130] >       "size":  "24678359",
	I1213 08:47:58.197005   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197015   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197022   47783 command_runner.go:130] >       },
	I1213 08:47:58.197030   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197034   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197037   47783 command_runner.go:130] >     },
	I1213 08:47:58.197040   47783 command_runner.go:130] >     {
	I1213 08:47:58.197048   47783 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 08:47:58.197056   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197063   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 08:47:58.197069   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197074   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197086   47783 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1213 08:47:58.197094   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197098   47783 command_runner.go:130] >       "size":  "20661043",
	I1213 08:47:58.197105   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197109   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197113   47783 command_runner.go:130] >       },
	I1213 08:47:58.197117   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197121   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197126   47783 command_runner.go:130] >     },
	I1213 08:47:58.197129   47783 command_runner.go:130] >     {
	I1213 08:47:58.197140   47783 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 08:47:58.197144   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197154   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 08:47:58.197158   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197162   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197173   47783 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 08:47:58.197180   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197185   47783 command_runner.go:130] >       "size":  "22429671",
	I1213 08:47:58.197189   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197194   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197198   47783 command_runner.go:130] >     },
	I1213 08:47:58.197201   47783 command_runner.go:130] >     {
	I1213 08:47:58.197209   47783 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 08:47:58.197216   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197225   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 08:47:58.197232   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197237   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197248   47783 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1213 08:47:58.197255   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197259   47783 command_runner.go:130] >       "size":  "15391364",
	I1213 08:47:58.197266   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197270   47783 command_runner.go:130] >         "value":  "0"
	I1213 08:47:58.197273   47783 command_runner.go:130] >       },
	I1213 08:47:58.197279   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197283   47783 command_runner.go:130] >       "pinned":  false
	I1213 08:47:58.197286   47783 command_runner.go:130] >     },
	I1213 08:47:58.197289   47783 command_runner.go:130] >     {
	I1213 08:47:58.197296   47783 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 08:47:58.197304   47783 command_runner.go:130] >       "repoTags":  [
	I1213 08:47:58.197309   47783 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 08:47:58.197313   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197320   47783 command_runner.go:130] >       "repoDigests":  [
	I1213 08:47:58.197329   47783 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1213 08:47:58.197335   47783 command_runner.go:130] >       ],
	I1213 08:47:58.197339   47783 command_runner.go:130] >       "size":  "267939",
	I1213 08:47:58.197346   47783 command_runner.go:130] >       "uid":  {
	I1213 08:47:58.197351   47783 command_runner.go:130] >         "value":  "65535"
	I1213 08:47:58.197356   47783 command_runner.go:130] >       },
	I1213 08:47:58.197362   47783 command_runner.go:130] >       "username":  "",
	I1213 08:47:58.197366   47783 command_runner.go:130] >       "pinned":  true
	I1213 08:47:58.197369   47783 command_runner.go:130] >     }
	I1213 08:47:58.197372   47783 command_runner.go:130] >   ]
	I1213 08:47:58.197375   47783 command_runner.go:130] > }
	I1213 08:47:58.199421   47783 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:47:58.199439   47783 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:47:58.199455   47783 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:47:58.199601   47783 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
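Editor's note: the kubelet unit printed at kubeadm.go:947 is rendered from a template and then copied to the node as the 328-byte 10-kubeadm.conf drop-in seen a few lines below. A sketch of that rendering with text/template; the template text and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// unitData holds the values substituted into the drop-in; names are illustrative.
type unitData struct {
	KubeletPath, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Render to stdout; minikube instead ships the bytes over SSH ("scp memory").
	_ = t.Execute(os.Stdout, unitData{
		KubeletPath: "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		NodeName:    "functional-074420",
		NodeIP:      "192.168.49.2",
	})
}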
	I1213 08:47:58.199669   47783 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:47:58.226280   47783 command_runner.go:130] > {
	I1213 08:47:58.226303   47783 command_runner.go:130] >   "cniconfig": {
	I1213 08:47:58.226310   47783 command_runner.go:130] >     "Networks": [
	I1213 08:47:58.226314   47783 command_runner.go:130] >       {
	I1213 08:47:58.226319   47783 command_runner.go:130] >         "Config": {
	I1213 08:47:58.226324   47783 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1213 08:47:58.226329   47783 command_runner.go:130] >           "Name": "cni-loopback",
	I1213 08:47:58.226333   47783 command_runner.go:130] >           "Plugins": [
	I1213 08:47:58.226336   47783 command_runner.go:130] >             {
	I1213 08:47:58.226340   47783 command_runner.go:130] >               "Network": {
	I1213 08:47:58.226344   47783 command_runner.go:130] >                 "ipam": {},
	I1213 08:47:58.226350   47783 command_runner.go:130] >                 "type": "loopback"
	I1213 08:47:58.226358   47783 command_runner.go:130] >               },
	I1213 08:47:58.226364   47783 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1213 08:47:58.226371   47783 command_runner.go:130] >             }
	I1213 08:47:58.226374   47783 command_runner.go:130] >           ],
	I1213 08:47:58.226384   47783 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1213 08:47:58.226388   47783 command_runner.go:130] >         },
	I1213 08:47:58.226398   47783 command_runner.go:130] >         "IFName": "lo"
	I1213 08:47:58.226402   47783 command_runner.go:130] >       }
	I1213 08:47:58.226405   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226410   47783 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1213 08:47:58.226415   47783 command_runner.go:130] >     "PluginDirs": [
	I1213 08:47:58.226419   47783 command_runner.go:130] >       "/opt/cni/bin"
	I1213 08:47:58.226425   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226430   47783 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1213 08:47:58.226442   47783 command_runner.go:130] >     "Prefix": "eth"
	I1213 08:47:58.226445   47783 command_runner.go:130] >   },
	I1213 08:47:58.226448   47783 command_runner.go:130] >   "config": {
	I1213 08:47:58.226454   47783 command_runner.go:130] >     "cdiSpecDirs": [
	I1213 08:47:58.226459   47783 command_runner.go:130] >       "/etc/cdi",
	I1213 08:47:58.226466   47783 command_runner.go:130] >       "/var/run/cdi"
	I1213 08:47:58.226472   47783 command_runner.go:130] >     ],
	I1213 08:47:58.226480   47783 command_runner.go:130] >     "cni": {
	I1213 08:47:58.226484   47783 command_runner.go:130] >       "binDir": "",
	I1213 08:47:58.226487   47783 command_runner.go:130] >       "binDirs": [
	I1213 08:47:58.226491   47783 command_runner.go:130] >         "/opt/cni/bin"
	I1213 08:47:58.226495   47783 command_runner.go:130] >       ],
	I1213 08:47:58.226499   47783 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1213 08:47:58.226503   47783 command_runner.go:130] >       "confTemplate": "",
	I1213 08:47:58.226507   47783 command_runner.go:130] >       "ipPref": "",
	I1213 08:47:58.226510   47783 command_runner.go:130] >       "maxConfNum": 1,
	I1213 08:47:58.226514   47783 command_runner.go:130] >       "setupSerially": false,
	I1213 08:47:58.226519   47783 command_runner.go:130] >       "useInternalLoopback": false
	I1213 08:47:58.226524   47783 command_runner.go:130] >     },
	I1213 08:47:58.226530   47783 command_runner.go:130] >     "containerd": {
	I1213 08:47:58.226538   47783 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1213 08:47:58.226543   47783 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1213 08:47:58.226548   47783 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1213 08:47:58.226552   47783 command_runner.go:130] >       "runtimes": {
	I1213 08:47:58.226557   47783 command_runner.go:130] >         "runc": {
	I1213 08:47:58.226562   47783 command_runner.go:130] >           "ContainerAnnotations": null,
	I1213 08:47:58.226566   47783 command_runner.go:130] >           "PodAnnotations": null,
	I1213 08:47:58.226570   47783 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1213 08:47:58.226575   47783 command_runner.go:130] >           "cgroupWritable": false,
	I1213 08:47:58.226580   47783 command_runner.go:130] >           "cniConfDir": "",
	I1213 08:47:58.226586   47783 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1213 08:47:58.226591   47783 command_runner.go:130] >           "io_type": "",
	I1213 08:47:58.226596   47783 command_runner.go:130] >           "options": {
	I1213 08:47:58.226601   47783 command_runner.go:130] >             "BinaryName": "",
	I1213 08:47:58.226607   47783 command_runner.go:130] >             "CriuImagePath": "",
	I1213 08:47:58.226612   47783 command_runner.go:130] >             "CriuWorkPath": "",
	I1213 08:47:58.226616   47783 command_runner.go:130] >             "IoGid": 0,
	I1213 08:47:58.226620   47783 command_runner.go:130] >             "IoUid": 0,
	I1213 08:47:58.226629   47783 command_runner.go:130] >             "NoNewKeyring": false,
	I1213 08:47:58.226633   47783 command_runner.go:130] >             "Root": "",
	I1213 08:47:58.226641   47783 command_runner.go:130] >             "ShimCgroup": "",
	I1213 08:47:58.226649   47783 command_runner.go:130] >             "SystemdCgroup": false
	I1213 08:47:58.226652   47783 command_runner.go:130] >           },
	I1213 08:47:58.226657   47783 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1213 08:47:58.226666   47783 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1213 08:47:58.226678   47783 command_runner.go:130] >           "runtimePath": "",
	I1213 08:47:58.226683   47783 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1213 08:47:58.226689   47783 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1213 08:47:58.226698   47783 command_runner.go:130] >           "snapshotter": ""
	I1213 08:47:58.226702   47783 command_runner.go:130] >         }
	I1213 08:47:58.226705   47783 command_runner.go:130] >       }
	I1213 08:47:58.226710   47783 command_runner.go:130] >     },
	I1213 08:47:58.226721   47783 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1213 08:47:58.226728   47783 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1213 08:47:58.226735   47783 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1213 08:47:58.226739   47783 command_runner.go:130] >     "disableApparmor": false,
	I1213 08:47:58.226744   47783 command_runner.go:130] >     "disableHugetlbController": true,
	I1213 08:47:58.226748   47783 command_runner.go:130] >     "disableProcMount": false,
	I1213 08:47:58.226753   47783 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1213 08:47:58.226759   47783 command_runner.go:130] >     "enableCDI": true,
	I1213 08:47:58.226763   47783 command_runner.go:130] >     "enableSelinux": false,
	I1213 08:47:58.226769   47783 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1213 08:47:58.226775   47783 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1213 08:47:58.226782   47783 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1213 08:47:58.226787   47783 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1213 08:47:58.226797   47783 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1213 08:47:58.226806   47783 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1213 08:47:58.226811   47783 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1213 08:47:58.226819   47783 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226824   47783 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1213 08:47:58.226830   47783 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1213 08:47:58.226837   47783 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1213 08:47:58.226843   47783 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1213 08:47:58.226853   47783 command_runner.go:130] >   },
	I1213 08:47:58.226860   47783 command_runner.go:130] >   "features": {
	I1213 08:47:58.226865   47783 command_runner.go:130] >     "supplemental_groups_policy": true
	I1213 08:47:58.226868   47783 command_runner.go:130] >   },
	I1213 08:47:58.226872   47783 command_runner.go:130] >   "golang": "go1.24.9",
	I1213 08:47:58.226884   47783 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226898   47783 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1213 08:47:58.226903   47783 command_runner.go:130] >   "runtimeHandlers": [
	I1213 08:47:58.226906   47783 command_runner.go:130] >     {
	I1213 08:47:58.226910   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226915   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226921   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226925   47783 command_runner.go:130] >       }
	I1213 08:47:58.226928   47783 command_runner.go:130] >     },
	I1213 08:47:58.226934   47783 command_runner.go:130] >     {
	I1213 08:47:58.226938   47783 command_runner.go:130] >       "features": {
	I1213 08:47:58.226946   47783 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1213 08:47:58.226958   47783 command_runner.go:130] >         "user_namespaces": true
	I1213 08:47:58.226962   47783 command_runner.go:130] >       },
	I1213 08:47:58.226965   47783 command_runner.go:130] >       "name": "runc"
	I1213 08:47:58.226968   47783 command_runner.go:130] >     }
	I1213 08:47:58.226971   47783 command_runner.go:130] >   ],
	I1213 08:47:58.226976   47783 command_runner.go:130] >   "status": {
	I1213 08:47:58.226984   47783 command_runner.go:130] >     "conditions": [
	I1213 08:47:58.226989   47783 command_runner.go:130] >       {
	I1213 08:47:58.226993   47783 command_runner.go:130] >         "message": "",
	I1213 08:47:58.226997   47783 command_runner.go:130] >         "reason": "",
	I1213 08:47:58.227001   47783 command_runner.go:130] >         "status": true,
	I1213 08:47:58.227009   47783 command_runner.go:130] >         "type": "RuntimeReady"
	I1213 08:47:58.227015   47783 command_runner.go:130] >       },
	I1213 08:47:58.227019   47783 command_runner.go:130] >       {
	I1213 08:47:58.227033   47783 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1213 08:47:58.227038   47783 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1213 08:47:58.227046   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227054   47783 command_runner.go:130] >         "type": "NetworkReady"
	I1213 08:47:58.227057   47783 command_runner.go:130] >       },
	I1213 08:47:58.227060   47783 command_runner.go:130] >       {
	I1213 08:47:58.227083   47783 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1213 08:47:58.227094   47783 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1213 08:47:58.227100   47783 command_runner.go:130] >         "status": false,
	I1213 08:47:58.227106   47783 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1213 08:47:58.227111   47783 command_runner.go:130] >       }
	I1213 08:47:58.227115   47783 command_runner.go:130] >     ]
	I1213 08:47:58.227118   47783 command_runner.go:130] >   }
	I1213 08:47:58.227121   47783 command_runner.go:130] > }
	I1213 08:47:58.229345   47783 cni.go:84] Creating CNI manager for ""
	I1213 08:47:58.229369   47783 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:47:58.229387   47783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:47:58.229409   47783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
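Editor's note: the kindnet recommendation at cni.go:143 above is a driver/runtime dispatch. A toy reduction of that kind of decision, under the explicitly labeled assumption (not taken from the source) that the rule is simply "docker driver with a non-docker runtime gets kindnet":

package main

import "fmt"

// chooseCNI is a hypothetical reduction of minikube's CNI auto-selection:
// the docker driver paired with the containerd runtime gets kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge" // placeholder default, not minikube's actual fallback
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}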
	I1213 08:47:58.229527   47783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:47:58.229596   47783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:47:58.237061   47783 command_runner.go:130] > kubeadm
	I1213 08:47:58.237081   47783 command_runner.go:130] > kubectl
	I1213 08:47:58.237086   47783 command_runner.go:130] > kubelet
	I1213 08:47:58.237099   47783 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:47:58.237151   47783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:47:58.244326   47783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:47:58.256951   47783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:47:58.269808   47783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 08:47:58.282145   47783 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:47:58.286872   47783 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
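Editor's note: the grep above verifies that control-plane.minikube.internal resolves inside the node before kubelet is restarted. A self-contained sketch of the same check-and-append step (the path and entry match the log; the code itself is illustrative and needs root to write /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>host" to path unless a matching line
// already exists, mirroring the grep performed above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	entry := ip + "\t" + host
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == entry {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintln(f, entry)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}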
	I1213 08:47:58.287376   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:58.410199   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:47:59.022103   47783 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:47:59.022125   47783 certs.go:195] generating shared ca certs ...
	I1213 08:47:59.022141   47783 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.022352   47783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:47:59.022424   47783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:47:59.022444   47783 certs.go:257] generating profile certs ...
	I1213 08:47:59.022584   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:47:59.022699   47783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:47:59.022768   47783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:47:59.022808   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 08:47:59.022855   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 08:47:59.022876   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 08:47:59.022904   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 08:47:59.022937   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 08:47:59.022973   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 08:47:59.022995   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 08:47:59.023008   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 08:47:59.023095   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:47:59.023154   47783 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:47:59.023166   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:47:59.023224   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:47:59.023288   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:47:59.023328   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:47:59.023408   47783 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:47:59.023471   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem -> /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.023492   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.023541   47783 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.024142   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:47:59.045491   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:47:59.066181   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:47:59.087256   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:47:59.105383   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:47:59.122457   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:47:59.141188   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:47:59.160057   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:47:59.177518   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:47:59.194757   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:47:59.211990   47783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:47:59.231728   47783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:47:59.244528   47783 ssh_runner.go:195] Run: openssl version
	I1213 08:47:59.250389   47783 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 08:47:59.250777   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.258690   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:47:59.266115   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269715   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269750   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.269798   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:47:59.310445   47783 command_runner.go:130] > 51391683
	I1213 08:47:59.310954   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:47:59.318044   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.325154   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:47:59.332532   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336318   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336361   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.336416   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:47:59.376950   47783 command_runner.go:130] > 3ec20f2e
	I1213 08:47:59.377430   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 08:47:59.384916   47783 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.392420   47783 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:47:59.399763   47783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403540   47783 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403584   47783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.403630   47783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:47:59.443918   47783 command_runner.go:130] > b5213941
	I1213 08:47:59.444419   47783 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
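Editor's note: each CA above is installed by hashing it with openssl and symlinking /etc/ssl/certs/<subject-hash>.0 back to the PEM, which is what the 51391683, 3ec20f2e, and b5213941 lines show. A sketch of that step via os/exec, using the same openssl invocation as the log (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA creates /etc/ssl/certs/<subject-hash>.0 pointing at pem, the
// layout OpenSSL uses to look up trusted CAs by subject hash.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}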
	I1213 08:47:59.451702   47783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455380   47783 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:47:59.455462   47783 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 08:47:59.455488   47783 command_runner.go:130] > Device: 259,1	Inode: 1311318     Links: 1
	I1213 08:47:59.455502   47783 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 08:47:59.455526   47783 command_runner.go:130] > Access: 2025-12-13 08:43:51.909308195 +0000
	I1213 08:47:59.455533   47783 command_runner.go:130] > Modify: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455538   47783 command_runner.go:130] > Change: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455544   47783 command_runner.go:130] >  Birth: 2025-12-13 08:39:48.287420904 +0000
	I1213 08:47:59.455631   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:47:59.496226   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.496712   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:47:59.538384   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.538813   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:47:59.584114   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.584598   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:47:59.624635   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.625106   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:47:59.665474   47783 command_runner.go:130] > Certificate will not expire
	I1213 08:47:59.665947   47783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:47:59.706066   47783 command_runner.go:130] > Certificate will not expire
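Editor's note: the -checkend 86400 probes above ask whether each certificate survives the next 24 hours; "Certificate will not expire" means it does. The same check in pure Go with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, the pure-Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}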
	I1213 08:47:59.706546   47783 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:47:59.706648   47783 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:47:59.706732   47783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:47:59.735062   47783 cri.go:89] found id: ""
	I1213 08:47:59.735134   47783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:47:59.742080   47783 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 08:47:59.742103   47783 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 08:47:59.742110   47783 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 08:47:59.743039   47783 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:47:59.743056   47783 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:47:59.743123   47783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:47:59.750746   47783 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:47:59.751192   47783 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.751301   47783 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "functional-074420" cluster setting kubeconfig missing "functional-074420" context setting]
	I1213 08:47:59.751688   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
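Editor's note: the repair above fires because neither a cluster nor a context named functional-074420 exists in the kubeconfig yet. A sketch of that existence check using client-go's clientcmd loader (the package and clientcmd.LoadFromFile are real; the helper name and the rest are illustrative, and the program needs k8s.io/client-go as a module dependency):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair reports which kubeconfig stanzas are missing for a profile,
// mirroring the "cluster setting"/"context setting" messages in the log.
func needsRepair(path, profile string) ([]string, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return nil, err
	}
	var missing []string
	if _, ok := cfg.Clusters[profile]; !ok {
		missing = append(missing, "cluster")
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		missing = append(missing, "context")
	}
	return missing, nil
}

func main() {
	missing, err := needsRepair("/home/jenkins/minikube-integration/22128-2315/kubeconfig", "functional-074420")
	fmt.Println(missing, err)
}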
	I1213 08:47:59.752162   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.752336   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.752888   47783 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 08:47:59.752908   47783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 08:47:59.752914   47783 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 08:47:59.752919   47783 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 08:47:59.752923   47783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 08:47:59.753010   47783 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 08:47:59.753251   47783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:47:59.761240   47783 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 08:47:59.761275   47783 kubeadm.go:602] duration metric: took 18.213538ms to restartPrimaryControlPlane
	I1213 08:47:59.761286   47783 kubeadm.go:403] duration metric: took 54.748002ms to StartCluster
	I1213 08:47:59.761334   47783 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.761412   47783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.762024   47783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:47:59.762236   47783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 08:47:59.762588   47783 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:47:59.762635   47783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 08:47:59.762697   47783 addons.go:70] Setting storage-provisioner=true in profile "functional-074420"
	I1213 08:47:59.762710   47783 addons.go:239] Setting addon storage-provisioner=true in "functional-074420"
	I1213 08:47:59.762736   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.762848   47783 addons.go:70] Setting default-storageclass=true in profile "functional-074420"
	I1213 08:47:59.762897   47783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-074420"
	I1213 08:47:59.763226   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.763230   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.768637   47783 out.go:179] * Verifying Kubernetes components...
	I1213 08:47:59.771460   47783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:47:59.801964   47783 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:47:59.802130   47783 kapi.go:59] client config for functional-074420: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 08:47:59.802416   47783 addons.go:239] Setting addon default-storageclass=true in "functional-074420"
	I1213 08:47:59.802452   47783 host.go:66] Checking if "functional-074420" exists ...
	I1213 08:47:59.802879   47783 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:47:59.817615   47783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:47:59.820407   47783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:47:59.820438   47783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:47:59.820510   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.832904   47783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:47:59.832927   47783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:47:59.832987   47783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:47:59.858620   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:47:59.867019   47783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:48:00.019931   47783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:48:00.079586   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:00.079699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:00.772755   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772781   47783 node_ready.go:35] waiting up to 6m0s for node "functional-074420" to be "Ready" ...
	W1213 08:48:00.772842   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.772962   47783 retry.go:31] will retry after 342.791424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:00.773112   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:00.773133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:00.773144   47783 retry.go:31] will retry after 244.896783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
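
[annotation] The applies fail because kubectl's client-side validation tries to download the OpenAPI schema from an apiserver that is not yet listening on 8441, and minikube's retry helper reschedules each command with a growing, jittered delay: 342ms and 244ms here, climbing to several seconds later in this log. A minimal sketch of that retry shape; the delays and error string are illustrative, not minikube's exact backoff code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a jittered,
// doubling delay between tries -- the same shape as the "will retry after"
// lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("done:", err)
}
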
	I1213 08:48:00.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:00.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
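
[annotation] Interleaved with the retries, the client logs every round trip: a "Request" line with verb, URL, and headers, then a "Response" line whose status is empty and latency near zero when the TCP connect is refused. That pattern falls out of a wrapping http.RoundTripper, sketched here as a standalone program:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTransport mirrors the round_trippers lines above: it wraps another
// RoundTripper and logs the verb, URL, status, and elapsed milliseconds.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	status := "" // stays empty on connection refused, as in the log
	if resp != nil {
		status = resp.Status
	}
	fmt.Printf("verb=%s url=%s status=%q milliseconds=%d\n",
		req.Method, req.URL, status, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-074420")
	fmt.Println("err:", err)
}
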
	I1213 08:48:01.019052   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.079123   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.079165   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.079186   47783 retry.go:31] will retry after 233.412949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.116509   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.177616   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.181525   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.181562   47783 retry.go:31] will retry after 544.217788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.273820   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.273908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.274281   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.313528   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.373257   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.376997   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.377026   47783 retry.go:31] will retry after 483.901383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.726523   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:01.774029   47783 type.go:168] "Request Body" body=""
	I1213 08:48:01.774123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:01.774536   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:01.788802   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.792516   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.792575   47783 retry.go:31] will retry after 627.991267ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.861830   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:01.921846   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:01.925982   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:01.926017   47783 retry.go:31] will retry after 1.103907842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.273073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:02.420977   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:02.487960   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:02.491818   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.491849   47783 retry.go:31] will retry after 452.917795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:02.773092   47783 type.go:168] "Request Body" body=""
	I1213 08:48:02.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:02.773507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:02.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
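
[annotation] node_ready.go polls the node object every 500ms (the timestamps above step by half a second) with a 6-minute budget, treating connection refused as retryable rather than fatal. A sketch of that loop with client-go's wait helpers; PollUntilContextTimeout assumes a reasonably recent apimachinery, and the kubeconfig path is the in-VM one from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms for up to 6 minutes, the same
// cadence and budget node_ready.go reports above.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // connection refused is retried, not fatal
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "functional-074420"))
}
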
	I1213 08:48:02.945881   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:03.009201   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.013021   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.013052   47783 retry.go:31] will retry after 1.276929732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.030115   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:03.100586   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:03.104547   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.104578   47783 retry.go:31] will retry after 1.048810244s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:03.273922   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.274012   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.274318   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:03.773006   47783 type.go:168] "Request Body" body=""
	I1213 08:48:03.773078   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:03.773422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.153636   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:04.212539   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.212608   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.212632   47783 retry.go:31] will retry after 1.498415757s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.273795   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.273919   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.274275   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:04.290503   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:04.351966   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:04.352013   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.352031   47783 retry.go:31] will retry after 2.776026758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:04.773561   47783 type.go:168] "Request Body" body=""
	I1213 08:48:04.773631   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:04.773950   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:04.774040   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:05.273769   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.273843   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.274174   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.711960   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:05.773532   47783 type.go:168] "Request Body" body=""
	I1213 08:48:05.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:05.773904   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:05.778452   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:05.778491   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:05.778510   47783 retry.go:31] will retry after 3.257875901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:06.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:06.773209   47783 type.go:168] "Request Body" body=""
	I1213 08:48:06.773292   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:06.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:07.129286   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:07.188224   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:07.188280   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.188301   47783 retry.go:31] will retry after 1.575099921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:07.273578   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.273669   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.273988   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:07.274044   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:07.773778   47783 type.go:168] "Request Body" body=""
	I1213 08:48:07.773852   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:07.774188   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.273837   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.273926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.274179   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.763743   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:08.773132   47783 type.go:168] "Request Body" body=""
	I1213 08:48:08.773211   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:08.773479   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:08.823924   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:08.827716   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:08.827745   47783 retry.go:31] will retry after 4.082199617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.037077   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:09.107584   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:09.107627   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.107646   47783 retry.go:31] will retry after 4.733469164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:09.273965   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.274042   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.274370   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:09.274422   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:09.773216   47783 type.go:168] "Request Body" body=""
	I1213 08:48:09.773289   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:09.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.273065   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.273139   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:10.773111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:10.773192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:10.773561   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.272986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.273056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.273307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:11.773037   47783 type.go:168] "Request Body" body=""
	I1213 08:48:11.773117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:11.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:11.773571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:12.273226   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.273600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:12.773101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:12.773360   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:12.910787   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:12.972202   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:12.972251   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:12.972270   47783 retry.go:31] will retry after 8.911795338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.273667   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.274062   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:13.773915   47783 type.go:168] "Request Body" body=""
	I1213 08:48:13.773987   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:13.774307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:13.774364   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:13.841699   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:13.900246   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:13.900294   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:13.900313   47783 retry.go:31] will retry after 6.419298699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:14.273688   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.273763   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.274022   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:14.773814   47783 type.go:168] "Request Body" body=""
	I1213 08:48:14.773891   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:14.774197   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.273923   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.273993   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.274354   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:15.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:48:15.773092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:15.773436   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:16.273052   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.273127   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:16.273499   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:16.773084   47783 type.go:168] "Request Body" body=""
	I1213 08:48:16.773162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:16.773488   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.272982   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.273050   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.273294   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:17.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:48:17.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:17.773414   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:18.273126   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.273210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.273554   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:18.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:18.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:48:18.773109   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:18.773365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.273151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.273502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:19.773409   47783 type.go:168] "Request Body" body=""
	I1213 08:48:19.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:20.320652   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:20.382818   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:20.382863   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.382884   47783 retry.go:31] will retry after 5.774410243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:20.773290   47783 type.go:168] "Request Body" body=""
	I1213 08:48:20.773364   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:20.773699   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:20.773754   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:21.273419   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.273508   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.273838   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.773521   47783 type.go:168] "Request Body" body=""
	I1213 08:48:21.773588   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:21.773835   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:21.885194   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:21.947231   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:21.947284   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:21.947318   47783 retry.go:31] will retry after 10.220008645s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:22.273767   47783 type.go:168] "Request Body" body=""
	I1213 08:48:22.273840   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:22.274159   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:22.773949   47783 type.go:168] "Request Body" body=""
	I1213 08:48:22.774022   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:22.774282   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:22.774333   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
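
In parallel with the addon retries, the wait loop above polls GET /api/v1/nodes/functional-074420 roughly every 500 ms and inspects the node's Ready condition. A minimal client-go sketch of that check — assuming a reachable kubeconfig, and not minikube's own node_ready.go:

    // Minimal client-go sketch of a node Ready-condition check like the
    // poll logged above. Paths and names are illustrative assumptions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(kubeconfig, name string) (bool, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return false, err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            // This is where "connect: connection refused" surfaces above.
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        ready, err := nodeReady("/var/lib/minikube/kubeconfig", "functional-074420")
        fmt.Println(ready, err)
    }
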
	I1213 08:48:23.273005   47783 type.go:168] "Request Body" body=""
	I1213 08:48:23.273085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:23.273357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:23.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:48:23.773140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:23.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:24.273095   47783 type.go:168] "Request Body" body=""
	I1213 08:48:24.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:24.273534   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:24.773021   47783 type.go:168] "Request Body" body=""
	I1213 08:48:24.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:24.773394   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:25.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:48:25.273150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:25.273495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:25.273549   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:25.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:48:25.773494   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:25.773798   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:26.158458   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:26.211883   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:26.215285   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:26.215316   47783 retry.go:31] will retry after 15.443420543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
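
Note that both dials fail with "connection refused" — the node poll against 192.168.49.2:8441 and kubectl's schema fetch against localhost:8441 — which means nothing is listening on port 8441 at all: the apiserver itself is down, not merely unreachable over the network. A quick hypothetical probe that distinguishes a refused connection from a timeout:

    // Hypothetical connectivity probe: "connection refused" means the port
    // is closed (no listener), while a timeout would suggest filtering.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" here
                continue
            }
            conn.Close()
            fmt.Printf("%s: listening\n", addr)
        }
    }
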
	I1213 08:48:26.273497   47783 type.go:168] "Request Body" body=""
	I1213 08:48:26.273568   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:26.273871   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:26.773647   47783 type.go:168] "Request Body" body=""
	I1213 08:48:26.773724   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:26.774089   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:27.273764   47783 type.go:168] "Request Body" body=""
	I1213 08:48:27.273848   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:27.274199   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:27.274258   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:27.773967   47783 type.go:168] "Request Body" body=""
	I1213 08:48:27.774040   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:27.774313   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:28.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:48:28.273130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:28.273472   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:28.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:48:28.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:28.773511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:29.272971   47783 type.go:168] "Request Body" body=""
	I1213 08:48:29.273039   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:29.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:29.773229   47783 type.go:168] "Request Body" body=""
	I1213 08:48:29.773303   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:29.773634   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:29.773690   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:30.273344   47783 type.go:168] "Request Body" body=""
	I1213 08:48:30.273424   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:30.273761   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:30.773498   47783 type.go:168] "Request Body" body=""
	I1213 08:48:30.773573   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:30.773910   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:31.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:48:31.273780   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:31.274114   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:31.773932   47783 type.go:168] "Request Body" body=""
	I1213 08:48:31.774003   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:31.774336   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:31.774389   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:32.167590   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:32.226722   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:32.226762   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.226781   47783 retry.go:31] will retry after 8.254164246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:32.273897   47783 type.go:168] "Request Body" body=""
	I1213 08:48:32.273972   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:32.274230   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:32.772997   47783 type.go:168] "Request Body" body=""
	I1213 08:48:32.773100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:32.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:33.273106   47783 type.go:168] "Request Body" body=""
	I1213 08:48:33.273177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:33.273513   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:33.773160   47783 type.go:168] "Request Body" body=""
	I1213 08:48:33.773250   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:33.773621   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:34.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:48:34.273213   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:34.273537   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:34.273589   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:34.773220   47783 type.go:168] "Request Body" body=""
	I1213 08:48:34.773295   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:34.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:35.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:48:35.273116   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:35.273367   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:35.773073   47783 type.go:168] "Request Body" body=""
	I1213 08:48:35.773150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:35.773476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:36.273986   47783 type.go:168] "Request Body" body=""
	I1213 08:48:36.274079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:36.274424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:36.274479   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:36.773035   47783 type.go:168] "Request Body" body=""
	I1213 08:48:36.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:36.773358   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:37.273041   47783 type.go:168] "Request Body" body=""
	I1213 08:48:37.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:37.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:37.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:48:37.773130   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:37.773446   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:38.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:48:38.273105   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:38.273423   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:38.773129   47783 type.go:168] "Request Body" body=""
	I1213 08:48:38.773205   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:38.773535   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:38.773618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:39.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:48:39.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:39.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:39.773505   47783 type.go:168] "Request Body" body=""
	I1213 08:48:39.773593   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:39.773873   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:40.273652   47783 type.go:168] "Request Body" body=""
	I1213 08:48:40.273731   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:40.274095   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:40.481720   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:48:40.548346   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:40.548381   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:40.548399   47783 retry.go:31] will retry after 23.072803829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
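
The stderr hint suggests --validate=false, which only skips the client-side schema download; a sketch of that invocation follows, though it would not rescue these applies, since kubectl still needs a reachable apiserver to submit the manifest. Illustrative only, run outside the test harness:

    // Sketch of the workaround the error message itself suggests. The
    // flags (--force, --validate=false, -f) all appear in the log above;
    // the invocation here is an assumption, not minikube's code path.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out), err)
    }
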
	I1213 08:48:40.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:48:40.773944   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:40.774217   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:40.774266   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:41.273996   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.274066   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.274319   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:41.658979   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:41.720805   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:41.720849   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.720869   47783 retry.go:31] will retry after 14.236359641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:41.774005   47783 type.go:168] "Request Body" body=""
	I1213 08:48:41.774085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:41.774430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.273146   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.273232   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:42.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:48:42.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:42.773482   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:43.273054   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.273159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.273484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:43.273541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:43.773193   47783 type.go:168] "Request Body" body=""
	I1213 08:48:43.773275   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:43.773578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.273047   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.273454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:44.773108   47783 type.go:168] "Request Body" body=""
	I1213 08:48:44.773180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:44.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:45.273251   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.273329   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:45.273709   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:45.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:48:45.773758   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:45.774018   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.273828   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.273897   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.274229   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:46.772972   47783 type.go:168] "Request Body" body=""
	I1213 08:48:46.773043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:46.773362   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.273045   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.273126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:47.773139   47783 type.go:168] "Request Body" body=""
	I1213 08:48:47.773219   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:47.773579   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:47.773642   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:48.273287   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.273371   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.273677   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:48.773341   47783 type.go:168] "Request Body" body=""
	I1213 08:48:48.773416   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:48.773680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.273111   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.273506   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:49.773247   47783 type.go:168] "Request Body" body=""
	I1213 08:48:49.773323   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:49.773653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:49.773705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:50.273353   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.273427   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.273741   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:50.773691   47783 type.go:168] "Request Body" body=""
	I1213 08:48:50.773764   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:50.774083   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.273888   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.273963   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:51.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:48:51.773911   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:51.774215   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:51.774264   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:52.273985   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.274074   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:52.773081   47783 type.go:168] "Request Body" body=""
	I1213 08:48:52.773168   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:52.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.273059   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.273129   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.273441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:53.773125   47783 type.go:168] "Request Body" body=""
	I1213 08:48:53.773200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:53.773519   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:54.273062   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.273437   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:54.273482   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:54.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:48:54.773104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:54.773358   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.273041   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.273121   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.273415   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.773058   47783 type.go:168] "Request Body" body=""
	I1213 08:48:55.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:55.773419   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:55.957869   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:48:56.020865   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:48:56.020923   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:48:56.020946   47783 retry.go:31] will retry after 43.666748427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
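
Across this window the scheduled retry delays are 10.2 s, 15.4 s, 8.3 s, 23.1 s, 14.2 s, and now 43.7 s: broadly growing but jittered rather than strictly exponential, so the storage-provisioner and storageclass attempts interleave with the 500 ms node polls instead of marching in lockstep.
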
	I1213 08:48:56.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:56.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:48:56.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:56.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:56.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:57.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.273273   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.273598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:57.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:48:57.773100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:57.773380   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.273180   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.273497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:58.773215   47783 type.go:168] "Request Body" body=""
	I1213 08:48:58.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:58.773607   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:48:58.773665   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:48:59.273927   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.273999   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.274264   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:48:59.773208   47783 type.go:168] "Request Body" body=""
	I1213 08:48:59.773283   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:48:59.773635   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.273263   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.273369   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.273817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:00.773836   47783 type.go:168] "Request Body" body=""
	I1213 08:49:00.773908   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:00.774222   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:00.774277   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:01.274021   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.274100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.274424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:01.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:49:01.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:01.773375   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.272965   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.273041   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.273365   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:02.773094   47783 type.go:168] "Request Body" body=""
	I1213 08:49:02.773195   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:02.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:03.273064   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.273140   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:03.273512   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:03.622173   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:03.678608   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:03.682133   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.682162   47783 retry.go:31] will retry after 22.66884586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 08:49:03.773432   47783 type.go:168] "Request Body" body=""
	I1213 08:49:03.773502   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:03.773795   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.273439   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.273517   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.273868   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:04.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:49:04.773210   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:04.773461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:05.273151   47783 type.go:168] "Request Body" body=""
	I1213 08:49:05.273221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:05.273546   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:05.273657   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-074420 request/empty-response pair repeats every ~500ms from 08:49:05.773 through 08:49:26.273, every attempt refused, with the node_ready.go:55 "connection refused" warning recurring roughly every 2.5s (08:49:07.773, 08:49:09.773, ..., 08:49:25.273) ...]
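	For readers tracing the loop above: the repeated GET/refused pairs are minikube's poll of the node's "Ready" condition. A minimal shell sketch of the same check, assuming kubectl is pointed at this cluster's kubeconfig (an illustration only, not minikube's actual implementation):

		# Poll the node's Ready condition until it reports True (sketch only).
		# Assumes kubectl targets the functional-074420 cluster.
		while true; do
		  status=$(kubectl get node functional-074420 \
		    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
		  [ "$status" = "True" ] && break
		  echo "node not Ready yet (will retry)"   # mirrors the node_ready.go:55 warning
		  sleep 0.5                                # matches the ~500ms cadence in the log
		done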
	I1213 08:49:26.351410   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:49:26.407043   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410457   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:26.410551   47783 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
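	The apply fails client-side: kubectl cannot download the OpenAPI schema for validation because nothing is listening on port 8441. The error message itself names the workaround; a sketch of the manual retry it suggests (note that skipping validation only removes this particular failure, since the apiserver must still accept the submission):

		# Manual form of the retry hinted at by the error text (sketch only);
		# --validate=false skips the OpenAPI download that failed above.
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
		  --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml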
	[... polling continues unchanged: GET /api/v1/nodes/functional-074420 every ~500ms from 08:49:26.773 through 08:49:39.273, every attempt refused, with node_ready.go:55 warnings at 08:49:27.774, 08:49:30.273, 08:49:32.274, 08:49:34.773, 08:49:36.773 and 08:49:39.273 ...]
	I1213 08:49:39.687941   47783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:49:39.742570   47783 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746037   47783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 08:49:39.746134   47783 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 08:49:39.749234   47783 out.go:179] * Enabled addons: 
	I1213 08:49:39.751225   47783 addons.go:530] duration metric: took 1m39.988589749s for enable addons: enabled=[]
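	Both addon failures and every node poll above trace back to one symptom: connections to 192.168.49.2:8441 are refused, so the apiserver is not accepting connections at all. Two quick probes that would confirm this from the test host (a sketch; it assumes curl is available, that the docker-driver container carries the profile name, and that ss is installed inside it):

		# Probe the refused endpoint directly; -k skips TLS verification for the probe.
		curl -k --connect-timeout 2 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
		# Check from inside the node container whether anything listens on 8441.
		docker exec functional-074420 sh -c 'ss -ltn | grep 8441 || echo "nothing listening on 8441"'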
	I1213 08:49:39.773343   47783 type.go:168] "Request Body" body=""
	I1213 08:49:39.773416   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:39.773726   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:40.273095   47783 type.go:168] "Request Body" body=""
	I1213 08:49:40.273171   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:40.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:40.773067   47783 type.go:168] "Request Body" body=""
	I1213 08:49:40.773147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:40.773426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:41.273150   47783 type.go:168] "Request Body" body=""
	I1213 08:49:41.273226   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:41.273552   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:41.273610   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:41.773119   47783 type.go:168] "Request Body" body=""
	I1213 08:49:41.773205   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:41.773556   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:42.278526   47783 type.go:168] "Request Body" body=""
	I1213 08:49:42.278597   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:42.279069   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:42.773238   47783 type.go:168] "Request Body" body=""
	I1213 08:49:42.773329   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:42.773690   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:43.273405   47783 type.go:168] "Request Body" body=""
	I1213 08:49:43.273484   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:43.273806   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:43.273861   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:43.773568   47783 type.go:168] "Request Body" body=""
	I1213 08:49:43.773638   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:43.773886   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:44.273754   47783 type.go:168] "Request Body" body=""
	I1213 08:49:44.273826   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:44.274146   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:44.773918   47783 type.go:168] "Request Body" body=""
	I1213 08:49:44.773997   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:44.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:45.273043   47783 type.go:168] "Request Body" body=""
	I1213 08:49:45.273137   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:45.273511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:45.773242   47783 type.go:168] "Request Body" body=""
	I1213 08:49:45.773316   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:45.773611   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:45.773658   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:46.273330   47783 type.go:168] "Request Body" body=""
	I1213 08:49:46.273412   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:46.273751   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:46.773437   47783 type.go:168] "Request Body" body=""
	I1213 08:49:46.773504   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:46.773768   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:47.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:49:47.273144   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:47.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:47.773097   47783 type.go:168] "Request Body" body=""
	I1213 08:49:47.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:47.773505   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:48.273049   47783 type.go:168] "Request Body" body=""
	I1213 08:49:48.273125   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:48.273438   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:48.273501   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:48.773147   47783 type.go:168] "Request Body" body=""
	I1213 08:49:48.773223   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:48.773560   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:49.273284   47783 type.go:168] "Request Body" body=""
	I1213 08:49:49.273357   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:49.273664   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:49.773223   47783 type.go:168] "Request Body" body=""
	I1213 08:49:49.773311   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:49.773649   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:50.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:49:50.273201   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:50.273499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:50.273544   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:50.773094   47783 type.go:168] "Request Body" body=""
	I1213 08:49:50.773169   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:50.773510   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:51.273186   47783 type.go:168] "Request Body" body=""
	I1213 08:49:51.273280   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:51.273547   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:51.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:49:51.773182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:51.773496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:52.273068   47783 type.go:168] "Request Body" body=""
	I1213 08:49:52.273164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:52.273473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:52.773028   47783 type.go:168] "Request Body" body=""
	I1213 08:49:52.773114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:52.773409   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:52.773465   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:53.273131   47783 type.go:168] "Request Body" body=""
	I1213 08:49:53.273206   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:53.273490   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:53.773165   47783 type.go:168] "Request Body" body=""
	I1213 08:49:53.773244   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:53.773580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:54.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:49:54.273089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:54.273345   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:54.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:49:54.773137   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:54.773502   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:54.773578   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:49:55.273091   47783 type.go:168] "Request Body" body=""
	I1213 08:49:55.273162   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:55.273699   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:55.773529   47783 type.go:168] "Request Body" body=""
	I1213 08:49:55.773602   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:55.773850   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:56.273641   47783 type.go:168] "Request Body" body=""
	I1213 08:49:56.273732   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:56.274042   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:49:56.773818   47783 type.go:168] "Request Body" body=""
	I1213 08:49:56.773897   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:49:56.774220   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:49:56.774270   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the request/response cycle above repeats unchanged every ~500 ms from 08:49:57 through 08:50:58: each GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 receives no response from the apiserver, and node_ready.go:55 logs a `Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused` retry warning roughly every 2.5 s ...]
	I1213 08:50:59.273246   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.273324   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.273653   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:50:59.273705   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:50:59.773429   47783 type.go:168] "Request Body" body=""
	I1213 08:50:59.773497   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:50:59.773750   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.273188   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.273609   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:00.773677   47783 type.go:168] "Request Body" body=""
	I1213 08:51:00.773767   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:00.774124   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:01.273906   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.273981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.274248   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:01.274298   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:01.772982   47783 type.go:168] "Request Body" body=""
	I1213 08:51:01.773060   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:01.773396   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.273104   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.273184   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.273510   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:02.773051   47783 type.go:168] "Request Body" body=""
	I1213 08:51:02.773122   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:02.773441   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.273077   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.273157   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:03.773197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:03.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:03.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:03.773666   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:04.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.273145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.273469   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:04.773150   47783 type.go:168] "Request Body" body=""
	I1213 08:51:04.773225   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:04.773542   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.273267   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.273342   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.273694   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:05.773534   47783 type.go:168] "Request Body" body=""
	I1213 08:51:05.773600   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:05.773861   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:05.773901   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:06.273627   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.273698   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.273995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:06.773786   47783 type.go:168] "Request Body" body=""
	I1213 08:51:06.773858   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:06.774165   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.273877   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.273959   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.274221   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:07.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:51:07.774080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:07.774408   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:07.774461   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:08.273096   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.273511   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:08.773065   47783 type.go:168] "Request Body" body=""
	I1213 08:51:08.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:08.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.273148   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.273474   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:09.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:51:09.773269   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:09.773600   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:10.273256   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.273325   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.273572   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:10.273611   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:10.773469   47783 type.go:168] "Request Body" body=""
	I1213 08:51:10.773540   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:10.773888   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.273704   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.273777   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.274082   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:11.773860   47783 type.go:168] "Request Body" body=""
	I1213 08:51:11.773926   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:12.273909   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.273991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.274305   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:12.274363   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:12.773010   47783 type.go:168] "Request Body" body=""
	I1213 08:51:12.773086   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:12.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.272974   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.273048   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.273299   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:13.774032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:13.774103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:13.774412   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.273190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:14.773021   47783 type.go:168] "Request Body" body=""
	I1213 08:51:14.773090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:14.773394   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:14.773446   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:15.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:15.773082   47783 type.go:168] "Request Body" body=""
	I1213 08:51:15.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:15.773504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.273178   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.273248   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.273492   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:16.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:16.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:16.773512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:16.773564   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:17.273224   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.273307   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.273601   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:17.773254   47783 type.go:168] "Request Body" body=""
	I1213 08:51:17.773319   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:17.773610   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.273315   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.273399   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.273714   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:18.773100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:18.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:18.773556   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:18.773613   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:19.273057   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.273134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.273422   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:19.773326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:19.773408   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:19.773817   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.273652   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.273726   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.274023   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:20.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:20.773981   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:20.774242   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:20.774291   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:21.272987   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.273071   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.273418   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:21.773142   47783 type.go:168] "Request Body" body=""
	I1213 08:51:21.773224   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:21.773593   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.273035   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.273385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:22.773093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:22.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:22.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:23.273121   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.273201   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.273516   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:23.273571   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:23.773217   47783 type.go:168] "Request Body" body=""
	I1213 08:51:23.773286   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:23.773563   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.273100   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:24.773255   47783 type.go:168] "Request Body" body=""
	I1213 08:51:24.773333   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:24.773667   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:25.273326   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.273394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.273641   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:25.273680   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:25.773681   47783 type.go:168] "Request Body" body=""
	I1213 08:51:25.773757   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:25.774066   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.273874   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.273947   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.274273   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:26.772979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:26.773047   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:26.773307   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.273109   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.273206   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.273597   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:27.773321   47783 type.go:168] "Request Body" body=""
	I1213 08:51:27.773394   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:27.773719   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:27.773778   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:28.273019   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.273092   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.273421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:28.773103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:28.773172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:28.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.273187   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.273261   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.273583   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:29.773457   47783 type.go:168] "Request Body" body=""
	I1213 08:51:29.773543   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:29.773812   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:29.773863   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:30.273575   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.273663   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.273957   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:30.773908   47783 type.go:168] "Request Body" body=""
	I1213 08:51:30.773991   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:30.774329   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.273977   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.274058   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.274309   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:31.773024   47783 type.go:168] "Request Body" body=""
	I1213 08:51:31.773126   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:31.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:32.273147   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.273218   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:32.273546   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:32.773025   47783 type.go:168] "Request Body" body=""
	I1213 08:51:32.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:32.773421   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.273171   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.273489   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:33.773083   47783 type.go:168] "Request Body" body=""
	I1213 08:51:33.773159   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:33.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.273044   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.273124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.273384   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:34.773086   47783 type.go:168] "Request Body" body=""
	I1213 08:51:34.773216   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:34.773545   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:34.773602   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:35.273264   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.273339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.273673   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:35.773675   47783 type.go:168] "Request Body" body=""
	I1213 08:51:35.773742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:35.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.273812   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.273886   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.274208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:36.773984   47783 type.go:168] "Request Body" body=""
	I1213 08:51:36.774056   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:36.774351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:36.774398   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:37.273032   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.273364   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:37.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:37.773146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:37.773499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.273070   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.273147   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.273480   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:38.773071   47783 type.go:168] "Request Body" body=""
	I1213 08:51:38.773150   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:38.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:39.273066   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.273476   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:39.273529   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:39.773199   47783 type.go:168] "Request Body" body=""
	I1213 08:51:39.773278   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:39.773580   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.273030   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.273100   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.273392   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:40.773204   47783 type.go:168] "Request Body" body=""
	I1213 08:51:40.773285   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:40.773647   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:41.273380   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.273453   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.273801   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:41.273854   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:41.773592   47783 type.go:168] "Request Body" body=""
	I1213 08:51:41.773662   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:41.773929   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.273710   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.273789   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.274146   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:42.773753   47783 type.go:168] "Request Body" body=""
	I1213 08:51:42.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:42.774140   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:43.273914   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.273992   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.274262   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:43.274314   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:43.774012   47783 type.go:168] "Request Body" body=""
	I1213 08:51:43.774089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:43.774435   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.273033   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.273110   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:44.773038   47783 type.go:168] "Request Body" body=""
	I1213 08:51:44.773145   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:44.773485   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.273088   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.273517   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:45.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:51:45.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:45.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:45.773548   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:46.273137   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.273200   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.273447   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:46.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:51:46.773198   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:46.773498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.273202   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.273291   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.273588   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:47.773062   47783 type.go:168] "Request Body" body=""
	I1213 08:51:47.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:47.773395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:48.273117   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.273197   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:48.273569   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:48.773253   47783 type.go:168] "Request Body" body=""
	I1213 08:51:48.773339   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:48.773688   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.273385   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.273462   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.273726   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:49.773610   47783 type.go:168] "Request Body" body=""
	I1213 08:51:49.773679   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:49.773995   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:50.273790   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.273866   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.274187   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:50.274243   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:50.773061   47783 type.go:168] "Request Body" body=""
	I1213 08:51:50.773136   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:50.773466   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.273471   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:51.773060   47783 type.go:168] "Request Body" body=""
	I1213 08:51:51.773138   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:51.773445   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.273093   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.273335   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:52.773020   47783 type.go:168] "Request Body" body=""
	I1213 08:51:52.773103   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:52.773428   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:52.773493   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:53.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.273277   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.273626   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:53.773306   47783 type.go:168] "Request Body" body=""
	I1213 08:51:53.773373   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:53.773624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.273182   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.273587   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:54.773074   47783 type.go:168] "Request Body" body=""
	I1213 08:51:54.773155   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:54.773500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:54.773556   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:55.273197   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.273270   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.273520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:55.773075   47783 type.go:168] "Request Body" body=""
	I1213 08:51:55.773149   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:55.773705   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.273383   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.273474   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.274426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:56.772963   47783 type.go:168] "Request Body" body=""
	I1213 08:51:56.773031   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:56.773288   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:57.272979   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.273114   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:57.273481   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:57.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:51:57.773193   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:57.773526   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.273042   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.273108   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:58.773114   47783 type.go:168] "Request Body" body=""
	I1213 08:51:58.773188   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:58.773503   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:51:59.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.273220   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:51:59.273585   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:51:59.773064   47783 type.go:168] "Request Body" body=""
	I1213 08:51:59.773161   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:51:59.773540   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.273530   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.273627   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.274021   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:00.773091   47783 type.go:168] "Request Body" body=""
	I1213 08:52:00.773167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:00.773522   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.273071   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.273142   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.273461   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:01.773057   47783 type.go:168] "Request Body" body=""
	I1213 08:52:01.773134   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:01.773451   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:01.773527   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:02.273229   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.273310   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.273637   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:02.773212   47783 type.go:168] "Request Body" body=""
	I1213 08:52:02.773280   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:02.773548   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.273115   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.273194   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.273565   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:03.773267   47783 type.go:168] "Request Body" body=""
	I1213 08:52:03.773352   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:03.773695   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:03.773772   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:04.273021   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.273090   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.273426   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:04.773087   47783 type.go:168] "Request Body" body=""
	I1213 08:52:04.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:04.773473   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.273162   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.273242   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.273562   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:05.773515   47783 type.go:168] "Request Body" body=""
	I1213 08:52:05.773585   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:05.773843   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:05.773884   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:06.273680   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.273755   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.274063   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:06.773854   47783 type.go:168] "Request Body" body=""
	I1213 08:52:06.773929   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:06.774259   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.274012   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.274080   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.274341   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:07.773027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:07.773107   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:07.773425   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:08.273101   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.273178   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.273531   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:08.273588   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:08.773079   47783 type.go:168] "Request Body" body=""
	I1213 08:52:08.773151   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:08.773463   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.273105   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.273185   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.273541   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:09.773271   47783 type.go:168] "Request Body" body=""
	I1213 08:52:09.773343   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:09.773683   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:10.273235   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.273308   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.273623   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:10.273686   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:10.773580   47783 type.go:168] "Request Body" body=""
	I1213 08:52:10.773661   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:10.773990   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.273663   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.273742   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.274065   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:11.773833   47783 type.go:168] "Request Body" body=""
	I1213 08:52:11.773902   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:11.774166   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:12.273942   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.274016   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.274348   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:12.274403   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:12.774024   47783 type.go:168] "Request Body" body=""
	I1213 08:52:12.774098   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:12.774431   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.273166   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.273514   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:13.773196   47783 type.go:168] "Request Body" body=""
	I1213 08:52:13.773274   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:13.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.273324   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.273404   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.273738   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:14.773419   47783 type.go:168] "Request Body" body=""
	I1213 08:52:14.773491   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:14.773809   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:14.773870   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:15.273604   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.273678   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.273970   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:15.773929   47783 type.go:168] "Request Body" body=""
	I1213 08:52:15.774005   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:15.774334   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.273966   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.274045   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.274328   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:16.773036   47783 type.go:168] "Request Body" body=""
	I1213 08:52:16.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:16.773432   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:17.273107   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.273192   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.273578   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:17.273639   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:17.772950   47783 type.go:168] "Request Body" body=""
	I1213 08:52:17.773025   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:17.773278   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.274027   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.274101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.274430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:18.772996   47783 type.go:168] "Request Body" body=""
	I1213 08:52:18.773067   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:18.773402   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.273081   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.273152   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.273456   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:19.773356   47783 type.go:168] "Request Body" body=""
	I1213 08:52:19.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:19.773764   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:19.773818   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:20.273485   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.273567   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.273890   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:20.773865   47783 type.go:168] "Request Body" body=""
	I1213 08:52:20.773932   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:20.774231   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.273999   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.274069   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.274395   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:21.772998   47783 type.go:168] "Request Body" body=""
	I1213 08:52:21.773076   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:21.773386   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:22.273028   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.273101   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.273413   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:22.273462   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:22.773146   47783 type.go:168] "Request Body" body=""
	I1213 08:52:22.773221   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:22.773559   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.273239   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.273317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.273630   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:23.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:23.773089   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:23.773346   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:24.273099   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.273175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.273496   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:24.273551   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:24.773098   47783 type.go:168] "Request Body" body=""
	I1213 08:52:24.773173   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:24.773495   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.273040   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.273112   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.273374   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:25.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:52:25.773304   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:25.773625   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:26.273113   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.273187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.273523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:26.273583   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:26.773246   47783 type.go:168] "Request Body" body=""
	I1213 08:52:26.773317   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:26.773577   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.273252   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.273326   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.273645   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:27.773350   47783 type.go:168] "Request Body" body=""
	I1213 08:52:27.773425   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:27.773742   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.273072   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.273167   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.273499   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:28.773080   47783 type.go:168] "Request Body" body=""
	I1213 08:52:28.773158   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:28.773487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:28.773541   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:29.273204   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.273284   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.273624   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:29.773318   47783 type.go:168] "Request Body" body=""
	I1213 08:52:29.773387   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:29.773650   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.273174   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.273487   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:30.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:52:30.773175   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:30.773484   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:31.273046   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.273117   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.273373   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:31.273414   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:31.773116   47783 type.go:168] "Request Body" body=""
	I1213 08:52:31.773190   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:31.773520   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.273103   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.273181   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.273500   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:32.773017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:32.773087   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:32.773337   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:33.273004   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.273082   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.273417   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:33.273472   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:33.773013   47783 type.go:168] "Request Body" body=""
	I1213 08:52:33.773088   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:33.773405   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.273043   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.273120   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.273430   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:34.773152   47783 type.go:168] "Request Body" body=""
	I1213 08:52:34.773227   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:34.773558   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:35.273270   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.273350   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.273680   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:35.273739   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:35.773621   47783 type.go:168] "Request Body" body=""
	I1213 08:52:35.773688   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:35.773944   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.273767   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.273909   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.274226   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:36.773955   47783 type.go:168] "Request Body" body=""
	I1213 08:52:36.774030   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:36.774359   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.273017   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.273085   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.273361   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:37.773032   47783 type.go:168] "Request Body" body=""
	I1213 08:52:37.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:37.773460   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:37.773513   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:38.273181   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.273262   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.273612   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:38.773291   47783 type.go:168] "Request Body" body=""
	I1213 08:52:38.773360   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:38.773665   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.273069   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.273146   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.273457   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:39.773368   47783 type.go:168] "Request Body" body=""
	I1213 08:52:39.773449   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:39.773774   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:39.773830   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:40.273560   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.273630   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.273887   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:40.773805   47783 type.go:168] "Request Body" body=""
	I1213 08:52:40.773877   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:40.774208   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.273838   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.273922   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.274250   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:41.774001   47783 type.go:168] "Request Body" body=""
	I1213 08:52:41.774079   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:41.774332   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:41.774381   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:52:42.273093   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.273177   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.273529   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:42.773228   47783 type.go:168] "Request Body" body=""
	I1213 08:52:42.773303   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:42.773598   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.273031   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.273104   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.273352   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:43.773034   47783 type.go:168] "Request Body" body=""
	I1213 08:52:43.773111   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:43.773424   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:52:44.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:52:44.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:52:44.273504   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:52:44.273560   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-074420 request/response cycle above repeats every ~500 ms from I1213 08:52:44.773 through I1213 08:53:45.273, each attempt ending in "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready.go:55 logging the will-retry warning roughly every 2–2.5 s ...]
	I1213 08:53:45.773581   47783 type.go:168] "Request Body" body=""
	I1213 08:53:45.773651   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:45.773963   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:45.774017   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:46.275629   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.275703   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.275961   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:46.773754   47783 type.go:168] "Request Body" body=""
	I1213 08:53:46.773827   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:46.774161   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.273968   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.274043   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.274351   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:47.773011   47783 type.go:168] "Request Body" body=""
	I1213 08:53:47.773096   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:47.773357   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:48.273092   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.273172   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.273530   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:48.273587   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:48.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:48.773183   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:48.773523   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.273149   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.273234   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.273512   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:49.773403   47783 type.go:168] "Request Body" body=""
	I1213 08:53:49.773482   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:49.773819   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:50.273603   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.273680   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.273953   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:50.273999   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:50.773802   47783 type.go:168] "Request Body" body=""
	I1213 08:53:50.773880   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:50.774158   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.273956   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.274028   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.274317   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:51.772988   47783 type.go:168] "Request Body" body=""
	I1213 08:53:51.773063   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:51.773397   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.273098   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.273477   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:52.773095   47783 type.go:168] "Request Body" body=""
	I1213 08:53:52.773187   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:52.773515   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:52.773572   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:53.273241   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.273318   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.273661   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:53.773054   47783 type.go:168] "Request Body" body=""
	I1213 08:53:53.773123   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:53.773497   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.273076   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.273156   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.273507   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:54.773088   47783 type.go:168] "Request Body" body=""
	I1213 08:53:54.773164   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:54.773454   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:55.273120   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.273215   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.273569   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:55.273618   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:55.773106   47783 type.go:168] "Request Body" body=""
	I1213 08:53:55.773179   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:55.773491   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.273089   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.273170   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.273651   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:56.773063   47783 type.go:168] "Request Body" body=""
	I1213 08:53:56.773135   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:56.773385   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.273085   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.273165   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.273498   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:57.773224   47783 type.go:168] "Request Body" body=""
	I1213 08:53:57.773297   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:57.773605   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:57.773654   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:53:58.273317   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.273402   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.273675   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:58.773360   47783 type.go:168] "Request Body" body=""
	I1213 08:53:58.773432   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:58.773734   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.273441   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.273519   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.273831   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:53:59.773047   47783 type.go:168] "Request Body" body=""
	I1213 08:53:59.773124   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:53:59.773701   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 08:53:59.773764   47783 node_ready.go:55] error getting node "functional-074420" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-074420": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 08:54:00.273228   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.273367   47783 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-074420" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 08:54:00.273821   47783 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 08:54:00.773873   47783 type.go:168] "Request Body" body=""
	I1213 08:54:00.773932   47783 node_ready.go:38] duration metric: took 6m0.00107019s for node "functional-074420" to be "Ready" ...
	I1213 08:54:00.777004   47783 out.go:203] 
	W1213 08:54:00.779921   47783 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 08:54:00.779957   47783 out.go:285] * 
	W1213 08:54:00.782360   47783 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:54:00.785205   47783 out.go:203] 
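The 500 ms retry loop that dominates the log above is minikube's node_ready wait exhausting its 6m budget. For orientation only, here is a minimal client-go sketch of that kind of readiness poll; it is not minikube's actual implementation, the kubeconfig path is a stand-in, and the node name and 6m/500ms cadence are taken from the log:

	// waitready.go: hedged sketch of a node "Ready" poll like the one logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Stand-in kubeconfig path; the test environment uses its own minikube kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the wait budget seen in the log
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-074420", metav1.GetOptions{})
			if err == nil { // on "connection refused" we simply retry, as the log does
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		fmt.Println("timed out waiting for node Ready") // minikube exits with GUEST_START here
	}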
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:08 functional-074420 containerd[5215]: time="2025-12-13T08:54:08.698359845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.720401420Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.722569669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.730290778Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:09 functional-074420 containerd[5215]: time="2025-12-13T08:54:09.730754570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.650163915Z" level=info msg="No images store for sha256:4895c2d9428a4414f50a4570be6c07ab95cad42d4dfd499b34f79030b39f2e5b"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.652326470Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-074420\""
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.659849422Z" level=info msg="ImageCreate event name:\"sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:10 functional-074420 containerd[5215]: time="2025-12-13T08:54:10.660140979Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.442170817Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.444571119Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.446791299Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 13 08:54:11 functional-074420 containerd[5215]: time="2025-12-13T08:54:11.459554762Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.543620150Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.545804556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.555374304Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.556065389Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.577850819Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.580162225Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.582226498Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.590715099Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.714489707Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.716753137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.725375369Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 08:54:12 functional-074420 containerd[5215]: time="2025-12-13T08:54:12.725818468Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:54:16.736422    9344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:16.737190    9344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:16.738801    9344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:16.739350    9344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:54:16.740948    9344 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
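With every request to port 8441 refused, a sensible manual follow-up is to check whether an apiserver is listening at all. Two illustrative probes (endpoint and container name taken from this report; crictl inside the kic node is an assumption, though kicbase images normally ship it):

	curl -k https://192.168.49.2:8441/healthz
	docker exec functional-074420 sudo crictl ps -a | grep kube-apiserver

The "==> container status <==" section above is empty, so the second probe would show no apiserver container, consistent with the refused connections.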
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 08:54:16 up 36 min,  0 user,  load average: 0.61, 0.37, 0.51
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 08:54:13 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 13 08:54:14 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 kubelet[9118]: E1213 08:54:14.081057    9118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 13 08:54:14 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:14 functional-074420 kubelet[9217]: E1213 08:54:14.852739    9217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:14 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:15 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 13 08:54:15 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:15 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:15 functional-074420 kubelet[9238]: E1213 08:54:15.584927    9238 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:15 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:15 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 08:54:16 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 13 08:54:16 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:16 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 08:54:16 functional-074420 kubelet[9258]: E1213 08:54:16.335724    9258 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 08:54:16 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 08:54:16 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
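
Each restart above dies on the same validation: kubelet v1.35+ refuses to start on a cgroup v1 host unless explicitly told otherwise. The kubeadm warning later in this report names the opt-out, the KubeletConfiguration field FailCgroupV1. A minimal sketch of that config (illustrative; where minikube writes its kubelet config is profile-specific):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Opt back in to cgroup v1 for kubelet v1.35+ (deprecated; see KEP-5573).
	failCgroupV1: false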
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (430.144867ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:57:14.443348    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:58:51.893226    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:00:14.950672    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:02:14.443323    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:03:51.894525    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.105577325s)

-- stdout --
	* [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
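The suggestion at the end of the stderr block can be tried verbatim; combined with the flag this test exercises, an illustrative invocation would be:

	out/minikube-linux-arm64 start -p functional-074420 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --extra-config=kubelet.cgroup-driver=systemd --wait=all

Note, though, that the kubeadm output above points at the cgroup v1 validation as the actual blocker, so changing the cgroup driver alone may not be enough on this host.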
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.106765924s for "functional-074420" cluster.
I1213 09:06:31.937536    4120 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
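In the inspect output above, every PortBindings entry requests HostIp 127.0.0.1 with an empty HostPort, so dockerd assigns ephemeral host ports; the actual assignments (32788-32792 on this run) only appear under NetworkSettings.Ports. A minimal sketch of reading the SSH mapping from a shell, using the same Go template the driver invokes later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-074420
	# prints 32788 for this run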
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (311.861062ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount   │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image   │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete  │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start   │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start   │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:latest                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add minikube-local-cache-test:functional-074420                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache delete minikube-local-cache-test:functional-074420                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl images                                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ cache   │ functional-074420 cache reload                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ kubectl │ functional-074420 kubectl -- --context functional-074420 get pods                                                                                       │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ start   │ -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:54:17.881015   53550 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:54:17.881119   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881124   53550 out.go:374] Setting ErrFile to fd 2...
	I1213 08:54:17.881127   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881367   53550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:54:17.881711   53550 out.go:368] Setting JSON to false
	I1213 08:54:17.882486   53550 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2210,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:54:17.882543   53550 start.go:143] virtualization:  
	I1213 08:54:17.885916   53550 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:54:17.888999   53550 notify.go:221] Checking for updates...
	I1213 08:54:17.889435   53550 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:54:17.892383   53550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:54:17.895200   53550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:54:17.898042   53550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:54:17.900839   53550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:54:17.903626   53550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:54:17.906955   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:17.907037   53550 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:54:17.945038   53550 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:54:17.945157   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.004102   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:17.99317471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.004214   53550 docker.go:319] overlay module found
	I1213 08:54:18.009730   53550 out.go:179] * Using the docker driver based on existing profile
	I1213 08:54:18.012694   53550 start.go:309] selected driver: docker
	I1213 08:54:18.012706   53550 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.012816   53550 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:54:18.012919   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.070601   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:18.060838365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.071017   53550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:54:18.071040   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:18.071105   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:18.071147   53550 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.074420   53550 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:54:18.077242   53550 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:54:18.080227   53550 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:54:18.083176   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:18.083216   53550 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:54:18.083225   53550 cache.go:65] Caching tarball of preloaded images
	I1213 08:54:18.083262   53550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:54:18.083328   53550 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:54:18.083337   53550 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:54:18.083454   53550 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:54:18.104039   53550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:54:18.104049   53550 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:54:18.104071   53550 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:54:18.104097   53550 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:54:18.104173   53550 start.go:364] duration metric: took 60.013µs to acquireMachinesLock for "functional-074420"
	I1213 08:54:18.104193   53550 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:54:18.104198   53550 fix.go:54] fixHost starting: 
	I1213 08:54:18.104469   53550 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:54:18.121469   53550 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:54:18.121489   53550 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:54:18.124664   53550 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:54:18.124700   53550 machine.go:94] provisionDockerMachine start ...
	I1213 08:54:18.124779   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.142221   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.142535   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.142542   53550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:54:18.290889   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.290902   53550 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:54:18.290965   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.308398   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.308699   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.308706   53550 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:54:18.463898   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.463977   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.481808   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.482113   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.482128   53550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:54:18.639897   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:54:18.639913   53550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:54:18.639945   53550 ubuntu.go:190] setting up certificates
	I1213 08:54:18.639960   53550 provision.go:84] configureAuth start
	I1213 08:54:18.640021   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:18.657069   53550 provision.go:143] copyHostCerts
	I1213 08:54:18.657137   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:54:18.657145   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:54:18.657224   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:54:18.657317   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:54:18.657321   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:54:18.657345   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:54:18.657393   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:54:18.657396   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:54:18.657421   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:54:18.657462   53550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
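configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.49.2, functional-074420, localhost, minikube). A quick sketch for confirming what actually landed in the cert, assuming an openssl CLI is available on the Jenkins host:

	openssl x509 -in /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'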
	I1213 08:54:18.978851   53550 provision.go:177] copyRemoteCerts
	I1213 08:54:18.978913   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:54:18.978954   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.996497   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.099309   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:54:19.116489   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:54:19.134491   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:54:19.152584   53550 provision.go:87] duration metric: took 512.603195ms to configureAuth
	I1213 08:54:19.152601   53550 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:54:19.152798   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:19.152804   53550 machine.go:97] duration metric: took 1.028099835s to provisionDockerMachine
	I1213 08:54:19.152810   53550 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:54:19.152820   53550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:54:19.152868   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:54:19.152914   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.170238   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.275637   53550 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:54:19.280193   53550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:54:19.280211   53550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:54:19.280223   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:54:19.280276   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:54:19.280348   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:54:19.280419   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:54:19.280458   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:54:19.288420   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:19.306689   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:54:19.324595   53550 start.go:296] duration metric: took 171.770829ms for postStartSetup
	I1213 08:54:19.324673   53550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:54:19.324742   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.347206   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.449063   53550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:54:19.453865   53550 fix.go:56] duration metric: took 1.349660427s for fixHost
	I1213 08:54:19.453881   53550 start.go:83] releasing machines lock for "functional-074420", held for 1.349700469s
	I1213 08:54:19.453945   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:19.471349   53550 ssh_runner.go:195] Run: cat /version.json
	I1213 08:54:19.471396   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.471420   53550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:54:19.471481   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.492979   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.505163   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.686546   53550 ssh_runner.go:195] Run: systemctl --version
	I1213 08:54:19.692986   53550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:54:19.697303   53550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:54:19.697365   53550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:54:19.705133   53550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
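The find command above parks any bridge or podman CNI configs by renaming them to *.mk_disabled, so only the config minikube installs (kindnet on this profile) stays active; on this run there was nothing to move. A sketch for checking the result, using the report's own binary:

	out/minikube-linux-arm64 ssh -p functional-074420 "ls -la /etc/cni/net.d"
	# entries ending in .mk_disabled were parked by minikube; the rest are live CNI configs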
	I1213 08:54:19.705146   53550 start.go:496] detecting cgroup driver to use...
	I1213 08:54:19.705176   53550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:54:19.705226   53550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:54:19.720729   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:54:19.733460   53550 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:54:19.733514   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:54:19.748695   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:54:19.761831   53550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:54:19.870034   53550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:54:19.996014   53550 docker.go:234] disabling docker service ...
	I1213 08:54:19.996078   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:54:20.014799   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:54:20.030104   53550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:54:20.162441   53550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:54:20.283014   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:54:20.297184   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:54:20.311847   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:54:20.321141   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:54:20.330609   53550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:54:20.330677   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:54:20.339444   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.348072   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:54:20.356752   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.365663   53550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:54:20.373861   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:54:20.383214   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:54:20.392296   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:54:20.401182   53550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:54:20.408521   53550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:54:20.415857   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:20.524736   53550 ssh_runner.go:195] Run: sudo systemctl restart containerd
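The sed edits above pin the settings minikube needs before this restart: the cgroupfs driver (SystemdCgroup = false), the pause:3.10.1 sandbox image, unprivileged ports enabled, and restrict_oom_score_adj = false. A sketch for spot-checking the rewritten file inside the node:

	out/minikube-linux-arm64 ssh -p functional-074420 \
	  "sudo grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml"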
	I1213 08:54:20.667475   53550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:54:20.667553   53550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:54:20.671249   53550 start.go:564] Will wait 60s for crictl version
	I1213 08:54:20.671308   53550 ssh_runner.go:195] Run: which crictl
	I1213 08:54:20.674869   53550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:54:20.699246   53550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
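	The crictl version check above succeeds because minikube wrote /etc/crictl.yaml (the printf | tee at 08:54:20.297184), which points crictl at the containerd socket instead of letting it probe for one. The same checks can be reproduced by hand, e.g.:
	
		out/minikube-linux-arm64 ssh -p functional-074420 "cat /etc/crictl.yaml"
		# runtime-endpoint: unix:///run/containerd/containerd.sock
		out/minikube-linux-arm64 ssh -p functional-074420 "sudo crictl version"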
	I1213 08:54:20.699301   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.723418   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.748134   53550 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:54:20.751095   53550 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
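	The format string above flattens the whole network object into ad-hoc JSON; when only the subnet and gateway matter, a shorter variant of the same inspect does the job:
	
		docker network inspect functional-074420 \
		  -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
		# 192.168.49.0/24 192.168.49.1 on this run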
	I1213 08:54:20.766935   53550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:54:20.773949   53550 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 08:54:20.776880   53550 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:54:20.777036   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:20.777116   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.804622   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.804634   53550 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:54:20.804691   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.834431   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.834444   53550 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:54:20.834451   53550 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:54:20.834559   53550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
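	The unit text above is installed as the 10-kubeadm.conf drop-in (the 328-byte scp at 08:54:20.882880 below); the empty ExecStart= line clears the packaged command so the second ExecStart can re-declare it with this profile's flags. systemd can show the merged result:
	
		out/minikube-linux-arm64 ssh -p functional-074420 "systemctl cat kubelet"
		# prints /lib/systemd/system/kubelet.service followed by
		# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf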
	I1213 08:54:20.834624   53550 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:54:20.867174   53550 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 08:54:20.867192   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:20.867200   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:20.867220   53550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:54:20.867242   53550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:54:20.867356   53550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:54:20.867422   53550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:54:20.875127   53550 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:54:20.875185   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:54:20.882880   53550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:54:20.898646   53550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:54:20.911841   53550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
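	The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new before it replaces the live file. If the bundled kubeadm supports it (the validate subcommand appeared around v1.26, so a v1.35.0-beta.0 build should), the staged file can be sanity-checked in place:
	
		out/minikube-linux-arm64 ssh -p functional-074420 \
		  "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"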
	I1213 08:54:20.925067   53550 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:54:20.928972   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:21.047902   53550 ssh_runner.go:195] Run: sudo systemctl start kubelet
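	Since this ExtraConfig run ultimately failed, the kubelet journal is the natural next stop after the restart above; the last screenful usually shows whether node registration or static pod startup is what stalled:
	
		out/minikube-linux-arm64 ssh -p functional-074420 "sudo journalctl -u kubelet --no-pager -n 50"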
	I1213 08:54:21.521591   53550 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:54:21.521603   53550 certs.go:195] generating shared ca certs ...
	I1213 08:54:21.521617   53550 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:54:21.521756   53550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:54:21.521796   53550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:54:21.521802   53550 certs.go:257] generating profile certs ...
	I1213 08:54:21.521883   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:54:21.521933   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:54:21.521973   53550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:54:21.522082   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:54:21.522113   53550 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:54:21.522120   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:54:21.522146   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:54:21.522168   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:54:21.522190   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:54:21.522232   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:21.522796   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:54:21.547463   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:54:21.565502   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:54:21.583029   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:54:21.600675   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:54:21.617821   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:54:21.634794   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:54:21.652088   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:54:21.669338   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:54:21.685563   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:54:21.702834   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:54:21.719220   53550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:54:21.731588   53550 ssh_runner.go:195] Run: openssl version
	I1213 08:54:21.737357   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.744365   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:54:21.751316   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754910   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754961   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.795815   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:54:21.802933   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.809987   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:54:21.817141   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820600   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820668   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.861349   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:54:21.868464   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.875279   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:54:21.882257   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.885950   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.886012   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.927672   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
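
Note: the three-step pattern repeating above (copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, then ensure a /etc/ssl/certs/<hash>.0 symlink such as b5213941.0, 51391683.0, or 3ec20f2e.0) is how the CA certificates are made discoverable to TLS clients. A minimal sketch of the same idea, not minikube's implementation:

// cahash.go: compute the OpenSSL subject hash of a CA certificate and
// link it as /etc/ssl/certs/<hash>.0, mirroring the log lines above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// os.Symlink fails if the link already exists; the log uses `ln -fs` to force it.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
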
	I1213 08:54:21.934830   53550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:54:21.938562   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:54:21.979443   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:54:22.023588   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:54:22.065341   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:54:22.106598   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:54:22.147410   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
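
Note: each `openssl x509 -noout -checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window. The equivalent check in Go's standard library, as an illustrative sketch:

// checkend.go: the semantics of `openssl x509 -noout -checkend 86400`:
// verify the certificate is still valid 24h from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // path to a PEM certificate
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire") // openssl's exit-status-1 case
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
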
	I1213 08:54:22.188516   53550 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:22.188592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:54:22.188655   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.213570   53550 cri.go:89] found id: ""
	I1213 08:54:22.213647   53550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:54:22.221547   53550 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:54:22.221555   53550 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:54:22.221616   53550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:54:22.229060   53550 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.229555   53550 kubeconfig.go:125] found "functional-074420" server: "https://192.168.49.2:8441"
	I1213 08:54:22.232016   53550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:54:22.239904   53550 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:39:47.751417218 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 08:54:20.919594824 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
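
Note: drift detection here is a comparison of the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new; the single hunk above (the enable-admission-plugins value changing to NamespaceAutoProvision) is enough to force a reconfigure. A minimal sketch of the decision, not minikube's code:

// drift.go: compare the deployed config with the freshly rendered one
// and report whether the restart must reconfigure the control plane.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	old, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(old, fresh) {
		fmt.Println("no drift: reuse existing control plane config")
		return
	}
	// The log shows minikube shelling out to `diff -u` for the human-readable hunk.
	fmt.Println("drift detected: cluster will be reconfigured from kubeadm.yaml.new")
}
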
	I1213 08:54:22.239924   53550 kubeadm.go:1161] stopping kube-system containers ...
	I1213 08:54:22.239936   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 08:54:22.239998   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.266484   53550 cri.go:89] found id: ""
	I1213 08:54:22.266565   53550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 08:54:22.285823   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:54:22.293457   53550 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:43 /etc/kubernetes/scheduler.conf
	
	I1213 08:54:22.293536   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:54:22.301460   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:54:22.308894   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.308947   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:54:22.316083   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.323905   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.323959   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.331273   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:54:22.338736   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.338789   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
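
Note: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; a miss (grep exit status 1, as for kubelet.conf, controller-manager.conf, and scheduler.conf above) marks the file as stale, so it is removed and left for the kubeadm kubeconfig phase to regenerate. An illustrative sketch of the check:

// endpoint.go: delete any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8441"
	for _, conf := range []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Println("skip:", err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("stale endpoint, removing", conf)
			_ = os.Remove(conf) // the log shows `sudo rm -f` here
		}
	}
}
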
	I1213 08:54:22.346320   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:54:22.354109   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:22.400461   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.430760   53550 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.030276983s)
	I1213 08:54:24.430822   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.648055   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.718708   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
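
Note: on a restart minikube does not run a full `kubeadm init`; it replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, as the five commands above show. A minimal sketch of that sequence, illustrative only:

// phases.go: run the `kubeadm init phase` subcommands in the order
// visible in the log instead of a full `kubeadm init`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "--config=/var/tmp/minikube/kubeadm.yaml"
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		args := append(append([]string{"init", "phase"}, phase...), cfg)
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
}
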
	I1213 08:54:24.760609   53550 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:54:24.760672   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.261709   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.761435   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:26.261759   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:26.761732   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:27.260880   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:27.760874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:28.261721   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:28.761493   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:29.260874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:29.761189   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:30.260883   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:30.761448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:31.260872   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:31.761568   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:32.260967   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:32.760840   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:33.261542   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:33.761383   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:34.261771   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:34.761647   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:35.260857   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:35.760860   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:36.261572   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:36.761127   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:37.260746   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:37.760824   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:38.261446   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:38.760828   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:39.261574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:39.760780   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:40.261697   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:40.760839   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:41.261384   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:41.761710   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:42.261116   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:42.761031   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:43.260886   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:43.760843   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:44.261147   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:44.761415   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:45.260979   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:45.761106   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:46.261523   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:46.760830   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:47.261517   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:47.760776   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:48.261547   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:48.761373   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:49.260826   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:49.761574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:50.261136   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:50.761757   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:51.261197   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:51.761646   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:52.261300   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:52.761696   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:53.260864   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:53.761574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:54.260768   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:54.761242   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:55.261350   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:55.761559   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:56.261198   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:56.761477   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:57.261567   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:57.760861   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:58.261803   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:58.760843   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:59.260868   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:59.761676   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:00.261517   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:00.761052   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:01.260802   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:01.760882   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:02.260924   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:02.760742   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:03.261542   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:03.761112   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:04.260813   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:04.761741   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:05.261225   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:05.760863   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:06.261426   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:06.761616   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:07.260888   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:07.760944   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:08.260874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:08.761422   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:09.261767   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:09.761735   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:10.261376   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:10.760871   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:11.261761   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:11.761199   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:12.260928   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:12.761700   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:13.261570   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:13.761185   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:14.261662   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:14.760883   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:15.260866   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:15.761804   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:16.261789   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:16.761363   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:17.260776   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:17.761086   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:18.261288   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:18.760851   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:19.261191   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:19.761422   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:20.261577   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:20.761202   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:21.260768   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:21.761795   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:22.260945   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:22.761690   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:23.260860   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:23.761430   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:24.261657   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
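
Note: the block above is a poll loop: pgrep is retried roughly every 500ms from 08:54:24 to 08:55:24 and never finds a kube-apiserver process, which is the first concrete symptom of this failure. A minimal sketch of such a wait loop (the one-minute timeout here is an assumption for illustration, not minikube's implementation):

// wait.go: poll pgrep every 500ms until kube-apiserver appears or the
// context deadline passes, mirroring the repeated log lines above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute) // assumed timeout
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("apiserver never appeared:", err)
	}
}
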
	I1213 08:55:24.761756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:24.761828   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:24.786217   53550 cri.go:89] found id: ""
	I1213 08:55:24.786236   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.786243   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:24.786249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:24.786328   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:24.809104   53550 cri.go:89] found id: ""
	I1213 08:55:24.809118   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.809125   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:24.809130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:24.809187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:24.832861   53550 cri.go:89] found id: ""
	I1213 08:55:24.832880   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.832887   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:24.832892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:24.832949   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:24.856552   53550 cri.go:89] found id: ""
	I1213 08:55:24.856566   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.856573   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:24.856578   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:24.856634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:24.879617   53550 cri.go:89] found id: ""
	I1213 08:55:24.879631   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.879638   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:24.879643   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:24.879700   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:24.905506   53550 cri.go:89] found id: ""
	I1213 08:55:24.905520   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.905526   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:24.905532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:24.905588   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:24.930567   53550 cri.go:89] found id: ""
	I1213 08:55:24.930581   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.930587   53550 logs.go:284] No container was found matching "kindnet"
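
Note: `crictl ps -a --quiet --name=<component>` prints one container ID per line, so the empty `found id: ""` results above mean no control-plane container exists in any state, running or exited. An illustrative sketch of the sweep:

// crilist.go: list container IDs per component the way the sweep above
// does, treating empty output as "no container at all".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listIDs(c)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
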
	I1213 08:55:24.930595   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:24.930605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:24.961663   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:24.961679   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:25.017689   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:25.017709   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:25.035228   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:25.035257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:25.112728   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:25.112738   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:25.112750   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
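
Note: with the API server refusing connections on 8441, diagnosis falls back to host-level sources: journalctl for kubelet and containerd, dmesg, and a container listing whose pipeline falls back to `sudo docker ps -a` when crictl is unavailable. A minimal sketch of that gathering pass, illustrative only:

// gather.go: run each diagnostic shell pipeline from the log above and
// print its output, tolerating individual failures.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s failed: %v ==\n", name, err)
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
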
	I1213 08:55:27.676671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:27.686646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:27.686705   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:27.710449   53550 cri.go:89] found id: ""
	I1213 08:55:27.710462   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.710469   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:27.710474   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:27.710531   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:27.734910   53550 cri.go:89] found id: ""
	I1213 08:55:27.734923   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.734943   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:27.734949   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:27.735007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:27.762767   53550 cri.go:89] found id: ""
	I1213 08:55:27.762787   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.762794   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:27.762799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:27.762853   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:27.789263   53550 cri.go:89] found id: ""
	I1213 08:55:27.789282   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.789288   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:27.789293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:27.789352   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:27.817361   53550 cri.go:89] found id: ""
	I1213 08:55:27.817374   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.817381   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:27.817386   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:27.817444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:27.841034   53550 cri.go:89] found id: ""
	I1213 08:55:27.841047   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.841054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:27.841059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:27.841114   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:27.865949   53550 cri.go:89] found id: ""
	I1213 08:55:27.865963   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.865970   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:27.865978   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:27.865988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:27.921352   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:27.921372   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:27.934950   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:27.934966   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:28.012009   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:28.012023   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:28.012036   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:28.081214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:28.081231   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:30.614736   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:30.624755   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:30.624816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:30.650174   53550 cri.go:89] found id: ""
	I1213 08:55:30.650188   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.650195   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:30.650200   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:30.650257   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:30.675572   53550 cri.go:89] found id: ""
	I1213 08:55:30.675585   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.675592   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:30.675597   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:30.675661   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:30.700274   53550 cri.go:89] found id: ""
	I1213 08:55:30.700288   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.700295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:30.700301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:30.700357   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:30.724242   53550 cri.go:89] found id: ""
	I1213 08:55:30.724255   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.724262   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:30.724267   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:30.724322   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:30.749004   53550 cri.go:89] found id: ""
	I1213 08:55:30.749018   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.749025   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:30.749029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:30.749091   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:30.772837   53550 cri.go:89] found id: ""
	I1213 08:55:30.772850   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.772857   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:30.772862   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:30.772917   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:30.796329   53550 cri.go:89] found id: ""
	I1213 08:55:30.796343   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.796350   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:30.796358   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:30.796369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:30.806800   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:30.806816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:30.869919   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:30.861579   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.861934   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.863644   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.864162   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.865728   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:30.861579   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.861934   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.863644   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.864162   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.865728   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:30.869929   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:30.869939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:30.936472   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:30.936496   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:30.965152   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:30.965167   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:33.525938   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:33.536142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:33.536204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:33.560265   53550 cri.go:89] found id: ""
	I1213 08:55:33.560279   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.560286   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:33.560291   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:33.560346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:33.584181   53550 cri.go:89] found id: ""
	I1213 08:55:33.584194   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.584201   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:33.584206   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:33.584261   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:33.612544   53550 cri.go:89] found id: ""
	I1213 08:55:33.612558   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.612566   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:33.612571   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:33.612628   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:33.636515   53550 cri.go:89] found id: ""
	I1213 08:55:33.636529   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.636536   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:33.636541   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:33.636601   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:33.661822   53550 cri.go:89] found id: ""
	I1213 08:55:33.661835   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.661842   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:33.661847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:33.661909   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:33.694727   53550 cri.go:89] found id: ""
	I1213 08:55:33.694741   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.694748   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:33.694753   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:33.694812   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:33.721852   53550 cri.go:89] found id: ""
	I1213 08:55:33.721866   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.721873   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:33.721882   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:33.721892   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:33.789428   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:33.780794   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.781535   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783102   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783765   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.785318   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:33.780794   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.781535   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783102   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783765   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.785318   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:33.789438   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:33.789448   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:33.851847   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:33.851865   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:33.879583   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:33.879599   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:33.937089   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:33.937108   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:36.449743   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:36.459975   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:36.460040   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:36.485035   53550 cri.go:89] found id: ""
	I1213 08:55:36.485048   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.485055   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:36.485060   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:36.485116   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:36.509956   53550 cri.go:89] found id: ""
	I1213 08:55:36.509970   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.509977   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:36.509983   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:36.510040   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:36.535929   53550 cri.go:89] found id: ""
	I1213 08:55:36.535942   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.535949   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:36.535954   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:36.536014   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:36.560722   53550 cri.go:89] found id: ""
	I1213 08:55:36.560735   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.560742   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:36.560747   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:36.560818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:36.586434   53550 cri.go:89] found id: ""
	I1213 08:55:36.586448   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.586455   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:36.586459   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:36.586517   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:36.615482   53550 cri.go:89] found id: ""
	I1213 08:55:36.615506   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.615531   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:36.615536   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:36.615611   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:36.642408   53550 cri.go:89] found id: ""
	I1213 08:55:36.642422   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.642439   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:36.642446   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:36.642457   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:36.669924   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:36.669946   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:36.728697   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:36.728717   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:36.740739   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:36.740759   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:36.807194   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:36.798750   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.799446   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.801401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.802010   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.803502   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:36.798750   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.799446   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.801401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.802010   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.803502   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:36.807204   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:36.807218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:39.369875   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:39.380141   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:39.380202   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:39.407846   53550 cri.go:89] found id: ""
	I1213 08:55:39.407859   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.407867   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:39.407872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:39.407929   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:39.432500   53550 cri.go:89] found id: ""
	I1213 08:55:39.432514   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.432520   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:39.432525   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:39.432584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:39.457872   53550 cri.go:89] found id: ""
	I1213 08:55:39.457886   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.457893   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:39.457898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:39.457961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:39.483359   53550 cri.go:89] found id: ""
	I1213 08:55:39.483373   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.483379   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:39.483384   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:39.483458   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:39.508786   53550 cri.go:89] found id: ""
	I1213 08:55:39.508800   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.508807   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:39.508812   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:39.508879   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:39.533162   53550 cri.go:89] found id: ""
	I1213 08:55:39.533177   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.533184   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:39.533189   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:39.533247   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:39.558039   53550 cri.go:89] found id: ""
	I1213 08:55:39.558052   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.558059   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:39.558067   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:39.558076   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:39.618400   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:39.618423   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:39.629575   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:39.629592   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:39.694333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:39.686653   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.687216   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.688669   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.689084   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.690548   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:39.686653   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.687216   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.688669   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.689084   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.690548   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:39.694344   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:39.694355   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:39.757320   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:39.757338   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
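Every `kubectl describe nodes` attempt in these sweeps fails identically: the connection to localhost:8441 is refused on [::1]:8441, i.e. kubectl tries the IPv6 loopback first and nothing is listening on the apiserver port at all, which is consistent with crictl finding zero kube-apiserver containers. A quick, hypothetical check from inside the node that the port is simply closed (these two commands are not part of the test itself, just standard tools; 8441 is the --apiserver-port this profile was started with):

    sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'
    curl -sk https://localhost:8441/livez   # a running kube-apiserver would answer 'ok' here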
	I1213 08:55:42.285019   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:42.297179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:42.297241   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:42.328576   53550 cri.go:89] found id: ""
	I1213 08:55:42.328589   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.328611   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:42.328616   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:42.328678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:42.356055   53550 cri.go:89] found id: ""
	I1213 08:55:42.356069   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.356077   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:42.356082   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:42.356141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:42.380770   53550 cri.go:89] found id: ""
	I1213 08:55:42.380783   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.380790   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:42.380796   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:42.380866   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:42.409446   53550 cri.go:89] found id: ""
	I1213 08:55:42.409460   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.409466   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:42.409471   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:42.409530   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:42.433502   53550 cri.go:89] found id: ""
	I1213 08:55:42.433515   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.433522   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:42.433527   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:42.433583   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:42.458312   53550 cri.go:89] found id: ""
	I1213 08:55:42.458325   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.458336   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:42.458341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:42.458401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:42.482681   53550 cri.go:89] found id: ""
	I1213 08:55:42.482694   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.482702   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:42.482709   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:42.482719   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:42.544167   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:42.544185   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:42.572064   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:42.572079   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:42.629874   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:42.629892   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:42.641069   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:42.641084   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:42.704996   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:42.696662   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.697320   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.698873   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.699442   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.701188   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:42.696662   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.697320   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.698873   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.699442   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.701188   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:45.206980   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:45.225798   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:45.225900   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:45.260556   53550 cri.go:89] found id: ""
	I1213 08:55:45.260579   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.260586   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:45.260592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:45.260660   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:45.300170   53550 cri.go:89] found id: ""
	I1213 08:55:45.300183   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.300190   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:45.300195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:45.300253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:45.335036   53550 cri.go:89] found id: ""
	I1213 08:55:45.335050   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.335057   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:45.335062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:45.335123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:45.366574   53550 cri.go:89] found id: ""
	I1213 08:55:45.366587   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.366594   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:45.366599   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:45.366659   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:45.391767   53550 cri.go:89] found id: ""
	I1213 08:55:45.391781   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.391788   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:45.391793   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:45.391850   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:45.416855   53550 cri.go:89] found id: ""
	I1213 08:55:45.416869   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.416876   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:45.416882   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:45.416941   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:45.441837   53550 cri.go:89] found id: ""
	I1213 08:55:45.441859   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.441867   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:45.441875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:45.441885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:45.499186   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:45.499203   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:45.510383   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:45.510401   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:45.577305   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:45.568662   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.569333   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.570928   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.571293   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.572858   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:45.568662   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.569333   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.570928   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.571293   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.572858   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:45.577329   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:45.577340   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:45.639739   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:45.639761   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:48.174772   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:48.185188   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:48.185250   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:48.210180   53550 cri.go:89] found id: ""
	I1213 08:55:48.210194   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.210200   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:48.210205   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:48.210268   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:48.235002   53550 cri.go:89] found id: ""
	I1213 08:55:48.235015   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.235022   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:48.235027   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:48.235085   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:48.259922   53550 cri.go:89] found id: ""
	I1213 08:55:48.259936   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.259943   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:48.259948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:48.260007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:48.304590   53550 cri.go:89] found id: ""
	I1213 08:55:48.304605   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.304611   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:48.304617   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:48.304675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:48.342676   53550 cri.go:89] found id: ""
	I1213 08:55:48.342690   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.342697   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:48.342703   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:48.342759   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:48.366578   53550 cri.go:89] found id: ""
	I1213 08:55:48.366592   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.366599   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:48.366604   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:48.366673   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:48.391059   53550 cri.go:89] found id: ""
	I1213 08:55:48.391073   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.391080   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:48.391089   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:48.391099   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:48.462962   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:48.454137   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.455049   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.456663   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.457133   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.458628   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:48.454137   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.455049   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.456663   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.457133   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.458628   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:48.462973   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:48.462988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:48.526213   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:48.526232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:48.556890   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:48.556905   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:48.613408   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:48.613425   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:51.124505   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:51.134928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:51.134985   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:51.159141   53550 cri.go:89] found id: ""
	I1213 08:55:51.159154   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.159161   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:51.159166   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:51.159222   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:51.182690   53550 cri.go:89] found id: ""
	I1213 08:55:51.182704   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.182711   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:51.182716   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:51.182773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:51.208685   53550 cri.go:89] found id: ""
	I1213 08:55:51.208698   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.208705   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:51.208710   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:51.208766   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:51.233183   53550 cri.go:89] found id: ""
	I1213 08:55:51.233197   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.233204   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:51.233209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:51.233270   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:51.258042   53550 cri.go:89] found id: ""
	I1213 08:55:51.258069   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.258076   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:51.258081   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:51.258147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:51.290467   53550 cri.go:89] found id: ""
	I1213 08:55:51.290481   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.290488   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:51.290495   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:51.290566   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:51.323202   53550 cri.go:89] found id: ""
	I1213 08:55:51.323216   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.323223   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:51.323231   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:51.323240   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:51.394188   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:51.394206   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:51.426214   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:51.426230   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:51.485838   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:51.485855   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:51.496565   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:51.496580   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:51.576933   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:51.567900   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.568559   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570248   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570845   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.572406   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:51.567900   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.568559   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570248   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570845   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.572406   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:54.077204   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:54.087800   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:54.087874   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:54.113040   53550 cri.go:89] found id: ""
	I1213 08:55:54.113055   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.113062   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:54.113067   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:54.113124   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:54.138822   53550 cri.go:89] found id: ""
	I1213 08:55:54.138835   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.138842   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:54.138847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:54.138906   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:54.163439   53550 cri.go:89] found id: ""
	I1213 08:55:54.163452   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.163459   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:54.163465   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:54.163557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:54.188125   53550 cri.go:89] found id: ""
	I1213 08:55:54.188138   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.188145   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:54.188152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:54.188208   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:54.212893   53550 cri.go:89] found id: ""
	I1213 08:55:54.212907   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.212914   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:54.212920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:54.212981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:54.237373   53550 cri.go:89] found id: ""
	I1213 08:55:54.237386   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.237393   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:54.237399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:54.237459   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:54.265504   53550 cri.go:89] found id: ""
	I1213 08:55:54.265518   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.265525   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:54.265532   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:54.265542   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:54.333125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:54.333143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:54.347402   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:54.347418   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:54.412166   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:54.412175   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:54.412187   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:54.480709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:54.480730   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.010334   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:57.021059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:57.021120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:57.047281   53550 cri.go:89] found id: ""
	I1213 08:55:57.047294   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.047301   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:57.047306   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:57.047377   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:57.071416   53550 cri.go:89] found id: ""
	I1213 08:55:57.071429   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.071436   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:57.071441   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:57.071498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:57.101079   53550 cri.go:89] found id: ""
	I1213 08:55:57.101092   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.101104   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:57.101110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:57.101166   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:57.125577   53550 cri.go:89] found id: ""
	I1213 08:55:57.125591   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.125598   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:57.125603   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:57.125664   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:57.150869   53550 cri.go:89] found id: ""
	I1213 08:55:57.150883   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.150890   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:57.150895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:57.150952   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:57.175181   53550 cri.go:89] found id: ""
	I1213 08:55:57.175196   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.175203   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:57.175209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:57.175265   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:57.201951   53550 cri.go:89] found id: ""
	I1213 08:55:57.201964   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.201981   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:57.201989   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:57.202000   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.230175   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:57.230191   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:57.289371   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:57.289389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:57.301801   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:57.301816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:57.376259   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:57.376279   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:57.376290   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
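From the timestamps (08:55:33 through 08:56:00 in this excerpt) the sweep repeats on a roughly 3-second cadence, each time with the same result: no control-plane containers and a refused connection on 8441. An illustrative shell shape of that retry loop, assuming the per-pass commands sketched earlier; minikube's actual loop is the Go code referenced in these lines (logs.go / ssh_runner.go), not this script:

    # poll until an apiserver process appears, mirroring the ~3 s gap between passes in this log
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
      # ...re-run the crictl / journalctl / describe-nodes diagnostics for this pass
    done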
	I1213 08:55:59.938203   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:59.948941   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:59.949015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:59.975053   53550 cri.go:89] found id: ""
	I1213 08:55:59.975067   53550 logs.go:282] 0 containers: []
	W1213 08:55:59.975074   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:59.975079   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:59.975140   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:00.036168   53550 cri.go:89] found id: ""
	I1213 08:56:00.036184   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.036198   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:00.036204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:00.036272   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:00.212433   53550 cri.go:89] found id: ""
	I1213 08:56:00.212448   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.212457   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:00.212463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:00.212534   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:00.329892   53550 cri.go:89] found id: ""
	I1213 08:56:00.329922   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.329931   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:00.329937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:00.330147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:00.418357   53550 cri.go:89] found id: ""
	I1213 08:56:00.418382   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.418390   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:00.418395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:00.418485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:00.472022   53550 cri.go:89] found id: ""
	I1213 08:56:00.472038   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.472057   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:00.472063   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:00.472147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:00.501778   53550 cri.go:89] found id: ""
	I1213 08:56:00.501793   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.501800   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:00.501809   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:00.501821   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:00.514889   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:00.514908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:00.586263   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:00.576477   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.577506   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.579365   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.580284   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.582096   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:00.586275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:00.586286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:00.651709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:00.651729   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:00.679944   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:00.679961   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
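The container scan that each retry cycle performs is plain crictl filtered by component name. A minimal shell equivalent of the Run: lines above, assuming the same containerd CRI setup as the node; the component list and flags are copied from the log.

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # --quiet prints only container IDs; empty output means no match
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container found matching $c"
	done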
	I1213 08:56:03.240030   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:03.250487   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:03.250564   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:03.276985   53550 cri.go:89] found id: ""
	I1213 08:56:03.276999   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.277006   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:03.277011   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:03.277079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:03.305874   53550 cri.go:89] found id: ""
	I1213 08:56:03.305887   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.305894   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:03.305900   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:03.305961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:03.332792   53550 cri.go:89] found id: ""
	I1213 08:56:03.332805   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.332812   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:03.332817   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:03.332875   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:03.359327   53550 cri.go:89] found id: ""
	I1213 08:56:03.359340   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.359347   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:03.359352   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:03.359414   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:03.383789   53550 cri.go:89] found id: ""
	I1213 08:56:03.383802   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.383818   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:03.383823   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:03.383881   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:03.409294   53550 cri.go:89] found id: ""
	I1213 08:56:03.409308   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.409315   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:03.409320   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:03.409380   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:03.433579   53550 cri.go:89] found id: ""
	I1213 08:56:03.433593   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.433600   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:03.433608   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:03.433620   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:03.444272   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:03.444288   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:03.513583   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:03.505953   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.506372   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507548   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507861   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.509300   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:03.513594   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:03.513605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:03.576629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:03.576649   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:03.608162   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:03.608178   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.165156   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:06.175029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:06.175086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:06.199548   53550 cri.go:89] found id: ""
	I1213 08:56:06.199561   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.199567   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:06.199573   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:06.199630   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:06.223345   53550 cri.go:89] found id: ""
	I1213 08:56:06.223358   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.223365   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:06.223370   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:06.223427   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:06.253772   53550 cri.go:89] found id: ""
	I1213 08:56:06.253785   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.253792   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:06.253797   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:06.253862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:06.285197   53550 cri.go:89] found id: ""
	I1213 08:56:06.285209   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.285216   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:06.285221   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:06.285287   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:06.311117   53550 cri.go:89] found id: ""
	I1213 08:56:06.311130   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.311137   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:06.311142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:06.311199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:06.347101   53550 cri.go:89] found id: ""
	I1213 08:56:06.347115   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.347121   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:06.347134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:06.347212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:06.373093   53550 cri.go:89] found id: ""
	I1213 08:56:06.373106   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.373113   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:06.373121   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:06.373131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.432261   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:06.432286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:06.443840   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:06.443858   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:06.510711   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:06.501971   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.502684   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.504393   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.505195   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.506872   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:06.510722   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:06.510745   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:06.572342   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:06.572360   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
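The "container status" step above uses a deliberate fallback: prefer crictl, degrade to docker ps when crictl is absent. The same command, expanded with $() for readability:

	# `which crictl || echo crictl` keeps the command word non-empty, so the
	# line still parses when crictl is missing; in that case the crictl call
	# fails with "command not found" and the docker branch runs instead
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a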
	I1213 08:56:09.099708   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:09.109781   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:09.109837   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:09.134708   53550 cri.go:89] found id: ""
	I1213 08:56:09.134722   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.134729   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:09.134734   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:09.134793   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:09.159277   53550 cri.go:89] found id: ""
	I1213 08:56:09.159291   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.159297   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:09.159302   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:09.159367   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:09.185743   53550 cri.go:89] found id: ""
	I1213 08:56:09.185756   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.185763   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:09.185768   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:09.185827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:09.209881   53550 cri.go:89] found id: ""
	I1213 08:56:09.209894   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.209901   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:09.209907   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:09.209963   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:09.233078   53550 cri.go:89] found id: ""
	I1213 08:56:09.233091   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.233099   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:09.233104   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:09.233165   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:09.261187   53550 cri.go:89] found id: ""
	I1213 08:56:09.261200   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.261208   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:09.261216   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:09.261274   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:09.303988   53550 cri.go:89] found id: ""
	I1213 08:56:09.304001   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.304008   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:09.304016   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:09.304035   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:09.366963   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:09.366982   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:09.377754   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:09.377770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:09.445863   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:09.437325   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.437871   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.439698   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.440090   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.442054   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:09.445873   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:09.445884   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:09.507900   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:09.507918   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.036492   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:12.046919   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:12.046978   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:12.071196   53550 cri.go:89] found id: ""
	I1213 08:56:12.071211   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.071218   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:12.071223   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:12.071285   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:12.097508   53550 cri.go:89] found id: ""
	I1213 08:56:12.097522   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.097529   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:12.097534   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:12.097591   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:12.122628   53550 cri.go:89] found id: ""
	I1213 08:56:12.122641   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.122649   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:12.122654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:12.122714   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:12.147292   53550 cri.go:89] found id: ""
	I1213 08:56:12.147306   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.147313   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:12.147318   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:12.147385   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:12.171601   53550 cri.go:89] found id: ""
	I1213 08:56:12.171615   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.171622   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:12.171629   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:12.171685   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:12.195241   53550 cri.go:89] found id: ""
	I1213 08:56:12.195255   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.195272   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:12.195277   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:12.195332   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:12.220835   53550 cri.go:89] found id: ""
	I1213 08:56:12.220849   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.220866   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:12.220874   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:12.220883   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:12.283214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:12.283232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.322176   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:12.322192   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:12.382990   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:12.383007   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:12.393976   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:12.393993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:12.454561   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:12.446899   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.447411   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.448884   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.449202   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.450694   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
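Each cycle above starts with the same liveness probe: pgrep for a kube-apiserver process belonging to this minikube profile. A minimal sketch of that wait loop, assuming the roughly 3-second interval suggested by the timestamps; the pattern is copied verbatim from the Run: lines (quoted here so the shell does not expand it).

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  echo "kube-apiserver not running yet; retrying"
	  sleep 3
	done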
	I1213 08:56:14.956323   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:14.966379   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:14.966439   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:14.992786   53550 cri.go:89] found id: ""
	I1213 08:56:14.992801   53550 logs.go:282] 0 containers: []
	W1213 08:56:14.992807   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:14.992813   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:14.992876   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:15.028638   53550 cri.go:89] found id: ""
	I1213 08:56:15.028653   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.028660   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:15.028666   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:15.028735   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:15.059274   53550 cri.go:89] found id: ""
	I1213 08:56:15.059288   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.059295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:15.059301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:15.059408   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:15.089311   53550 cri.go:89] found id: ""
	I1213 08:56:15.089324   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.089331   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:15.089336   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:15.089401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:15.118691   53550 cri.go:89] found id: ""
	I1213 08:56:15.118705   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.118712   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:15.118717   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:15.118773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:15.144494   53550 cri.go:89] found id: ""
	I1213 08:56:15.144507   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.144514   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:15.144519   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:15.144577   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:15.173885   53550 cri.go:89] found id: ""
	I1213 08:56:15.173899   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.173905   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:15.173914   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:15.173925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:15.236112   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:15.228066   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.228792   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230471   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230772   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.232236   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:15.236121   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:15.236134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:15.298113   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:15.298131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:15.342964   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:15.342980   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:15.400545   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:15.400563   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:17.911444   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:17.921343   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:17.921402   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:17.947826   53550 cri.go:89] found id: ""
	I1213 08:56:17.947840   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.947847   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:17.947852   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:17.947908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:17.971346   53550 cri.go:89] found id: ""
	I1213 08:56:17.971376   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.971383   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:17.971387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:17.971449   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:17.999271   53550 cri.go:89] found id: ""
	I1213 08:56:17.999285   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.999292   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:17.999298   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:17.999371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:18.031971   53550 cri.go:89] found id: ""
	I1213 08:56:18.031984   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.031991   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:18.031996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:18.032058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:18.057098   53550 cri.go:89] found id: ""
	I1213 08:56:18.057112   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.057119   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:18.057127   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:18.057187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:18.081981   53550 cri.go:89] found id: ""
	I1213 08:56:18.082007   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.082014   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:18.082021   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:18.082092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:18.108138   53550 cri.go:89] found id: ""
	I1213 08:56:18.108152   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.108159   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:18.108166   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:18.108179   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:18.118705   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:18.118723   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:18.182232   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:18.173836   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.174533   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176073   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176393   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.177924   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:18.182242   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:18.182253   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:18.243585   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:18.243606   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:18.292655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:18.292671   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:20.860353   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:20.870680   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:20.870753   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:20.895485   53550 cri.go:89] found id: ""
	I1213 08:56:20.895499   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.895506   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:20.895532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:20.895592   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:20.921461   53550 cri.go:89] found id: ""
	I1213 08:56:20.921475   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.921482   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:20.921486   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:20.921545   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:20.946484   53550 cri.go:89] found id: ""
	I1213 08:56:20.946498   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.946507   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:20.946512   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:20.946570   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:20.971723   53550 cri.go:89] found id: ""
	I1213 08:56:20.971737   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.971744   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:20.971749   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:20.971806   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:20.996903   53550 cri.go:89] found id: ""
	I1213 08:56:20.996917   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.996924   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:20.996929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:20.996987   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:21.025270   53550 cri.go:89] found id: ""
	I1213 08:56:21.025283   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.025290   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:21.025295   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:21.025354   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:21.050984   53550 cri.go:89] found id: ""
	I1213 08:56:21.050998   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.051005   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:21.051013   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:21.051024   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:21.061853   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:21.061867   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:21.130720   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:21.122077   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.122912   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124411   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124796   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.126237   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:21.130741   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:21.130753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:21.194629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:21.194647   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:21.222790   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:21.222806   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
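With every component list coming back empty, the kubelet journal gathered above is the most likely place to find why the static pods never start. A quick filter over the same 400 lines the harness collects, assuming journalctl is available on the node:

	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|refused' | tail -n 20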
	I1213 08:56:23.780448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:23.790523   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:23.790584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:23.815703   53550 cri.go:89] found id: ""
	I1213 08:56:23.815717   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.815724   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:23.815729   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:23.815790   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:23.844047   53550 cri.go:89] found id: ""
	I1213 08:56:23.844062   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.844069   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:23.844074   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:23.844132   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:23.868824   53550 cri.go:89] found id: ""
	I1213 08:56:23.868837   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.868844   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:23.868849   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:23.868908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:23.893054   53550 cri.go:89] found id: ""
	I1213 08:56:23.893067   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.893084   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:23.893089   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:23.893158   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:23.918102   53550 cri.go:89] found id: ""
	I1213 08:56:23.918115   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.918141   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:23.918146   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:23.918221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:23.943674   53550 cri.go:89] found id: ""
	I1213 08:56:23.943706   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.943713   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:23.943719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:23.943780   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:23.969229   53550 cri.go:89] found id: ""
	I1213 08:56:23.969242   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.969250   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:23.969258   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:23.969268   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:24.024433   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:24.024452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:24.036371   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:24.036394   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:24.106333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:24.106343   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:24.106354   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:24.169184   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:24.169204   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
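The per-component probes in the pass above are plain crictl queries that minikube issues one at a time over its ssh_runner. Roughly equivalent shell, run inside the node (a sketch of the same checks, not minikube's actual code):

	# probe each expected control-plane container; prints NONE when crictl returns no IDs
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${ids:-NONE}"
	done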
	[log condensed: the diagnostic pass above repeats roughly every three seconds, at 08:56:26, 08:56:29, 08:56:32, 08:56:35, 08:56:38, 08:56:41, and 08:56:44. Each pass runs the same sequence: pgrep for kube-apiserver, then crictl ps for each of kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet (all find no containers), then kubelet/dmesg/describe-nodes/containerd/container-status log gathering in varying order. Each kubectl describe nodes fails with the same "connection to the server localhost:8441 was refused" stderr. Only the timestamps and kubectl PIDs differ between passes.]
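With zero control-plane containers found on every pass, the next place to look is usually the kubelet's handling of the static pod manifests. A hedged sketch, assuming the kubeadm-style layout minikube uses (manifests under /etc/kubernetes/manifests; exact file names may vary):

	$ sudo ls /etc/kubernetes/manifests/                       # expect kube-apiserver.yaml, etcd.yaml, ...
	$ sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'apiserver|static|failed'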
	I1213 08:56:47.187660   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:47.197939   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:47.197999   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:47.223309   53550 cri.go:89] found id: ""
	I1213 08:56:47.223328   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.223335   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:47.223341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:47.223404   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:47.248945   53550 cri.go:89] found id: ""
	I1213 08:56:47.248958   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.248965   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:47.248971   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:47.249030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:47.277058   53550 cri.go:89] found id: ""
	I1213 08:56:47.277072   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.277079   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:47.277084   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:47.277141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:47.301116   53550 cri.go:89] found id: ""
	I1213 08:56:47.301130   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.301137   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:47.301151   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:47.301209   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:47.323965   53550 cri.go:89] found id: ""
	I1213 08:56:47.323979   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.323987   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:47.323992   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:47.324050   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:47.348999   53550 cri.go:89] found id: ""
	I1213 08:56:47.349019   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.349027   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:47.349032   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:47.349092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:47.373783   53550 cri.go:89] found id: ""
	I1213 08:56:47.373797   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.373803   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:47.373811   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:47.373820   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:47.429021   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:47.429039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:47.439785   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:47.439801   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:47.500829   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:47.500840   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:47.500850   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.568111   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:47.568130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
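Each retry cycle in this log is the same control-plane probe: minikube first looks for a running kube-apiserver process with pgrep, then asks the CRI runtime for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet). Every crictl query returns an empty ID list, meaning no control-plane container was ever created. The probe can be reproduced by hand; a minimal sketch, assuming shell access to the node (the profile name is a placeholder; the commands are the ones shown in the Run lines above):

    minikube ssh -p <profile>
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # no output here means no apiserver process
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means the container never started
    sudo journalctl -u kubelet -n 50 --no-pager       # kubelet's own log usually explains why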
	I1213 08:56:50.110119   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:50.120537   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:50.120602   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:50.148966   53550 cri.go:89] found id: ""
	I1213 08:56:50.148980   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.148986   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:50.148991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:50.149046   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:50.177907   53550 cri.go:89] found id: ""
	I1213 08:56:50.177921   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.177928   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:50.177933   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:50.177996   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:50.203131   53550 cri.go:89] found id: ""
	I1213 08:56:50.203144   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.203151   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:50.203155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:50.203262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:50.226237   53550 cri.go:89] found id: ""
	I1213 08:56:50.226257   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.226264   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:50.226269   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:50.226327   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:50.253758   53550 cri.go:89] found id: ""
	I1213 08:56:50.253773   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.253779   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:50.253784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:50.253843   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:50.278302   53550 cri.go:89] found id: ""
	I1213 08:56:50.278315   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.278322   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:50.278327   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:50.278392   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:50.309556   53550 cri.go:89] found id: ""
	I1213 08:56:50.309569   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.309576   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:50.309584   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:50.309594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:50.320066   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:50.320081   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:50.382949   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:50.382958   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:50.382969   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:50.444351   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:50.444370   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.470781   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:50.470797   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.028628   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:53.039130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:53.039200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:53.063996   53550 cri.go:89] found id: ""
	I1213 08:56:53.064009   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.064015   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:53.064020   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:53.064076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:53.088275   53550 cri.go:89] found id: ""
	I1213 08:56:53.088289   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.088296   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:53.088300   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:53.088358   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:53.111773   53550 cri.go:89] found id: ""
	I1213 08:56:53.111786   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.111793   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:53.111808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:53.111887   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:53.137026   53550 cri.go:89] found id: ""
	I1213 08:56:53.137040   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.137046   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:53.137051   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:53.137107   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:53.160335   53550 cri.go:89] found id: ""
	I1213 08:56:53.160349   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.160356   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:53.160361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:53.160416   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:53.184713   53550 cri.go:89] found id: ""
	I1213 08:56:53.184726   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.184733   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:53.184738   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:53.184795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:53.208847   53550 cri.go:89] found id: ""
	I1213 08:56:53.208861   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.208868   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:53.208875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:53.208886   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.266985   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:53.267004   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:53.277388   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:53.277404   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:53.340191   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:53.340200   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:53.340211   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:53.401706   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:53.401724   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:55.928555   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:55.939550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:55.939616   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:55.965405   53550 cri.go:89] found id: ""
	I1213 08:56:55.965419   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.965426   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:55.965431   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:55.965498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:55.992150   53550 cri.go:89] found id: ""
	I1213 08:56:55.992164   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.992171   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:55.992175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:55.992230   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:56.016602   53550 cri.go:89] found id: ""
	I1213 08:56:56.016616   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.016623   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:56.016628   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:56.016689   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:56.042580   53550 cri.go:89] found id: ""
	I1213 08:56:56.042593   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.042600   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:56.042605   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:56.042662   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:56.068761   53550 cri.go:89] found id: ""
	I1213 08:56:56.068775   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.068782   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:56.068787   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:56.068848   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:56.093033   53550 cri.go:89] found id: ""
	I1213 08:56:56.093048   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.093055   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:56.093061   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:56.093126   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:56.117228   53550 cri.go:89] found id: ""
	I1213 08:56:56.117241   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.117248   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:56.117255   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:56.117266   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:56.176992   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:56.177011   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:56.188270   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:56.188285   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:56.253019   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:56.253029   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:56.253039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:56.317674   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:56.317696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:58.848619   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:58.859053   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:58.859112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:58.885409   53550 cri.go:89] found id: ""
	I1213 08:56:58.885423   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.885430   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:58.885436   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:58.885494   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:58.910222   53550 cri.go:89] found id: ""
	I1213 08:56:58.910236   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.910243   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:58.910249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:58.910325   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:58.934888   53550 cri.go:89] found id: ""
	I1213 08:56:58.934902   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.934909   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:58.934914   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:58.934973   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:58.959400   53550 cri.go:89] found id: ""
	I1213 08:56:58.959413   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.959420   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:58.959426   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:58.959487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:58.983607   53550 cri.go:89] found id: ""
	I1213 08:56:58.983621   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.983627   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:58.983651   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:58.983710   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:59.013864   53550 cri.go:89] found id: ""
	I1213 08:56:59.013879   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.013886   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:59.013892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:59.013953   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:59.039411   53550 cri.go:89] found id: ""
	I1213 08:56:59.039425   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.039432   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:59.039475   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:59.039485   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:59.096733   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:59.096753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:59.107622   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:59.107636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:59.174925   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:59.174934   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:59.174947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:59.241043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:59.241063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
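When the probe comes up empty, minikube falls back to collecting diagnostics: kubelet and containerd unit logs via journalctl, kernel warnings via dmesg, a kubectl describe of the nodes, and a raw container listing. The backticked `which crictl || echo crictl` construct in the container-status Run lines is a runtime fallback, equivalent to:

    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a   # prefer crictl if installed, else fall back to docker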
	I1213 08:57:01.772758   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:01.783635   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:01.783701   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:01.809991   53550 cri.go:89] found id: ""
	I1213 08:57:01.810006   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.810012   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:01.810017   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:01.810077   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:01.839186   53550 cri.go:89] found id: ""
	I1213 08:57:01.839200   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.839207   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:01.839212   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:01.839280   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:01.863706   53550 cri.go:89] found id: ""
	I1213 08:57:01.863720   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.863727   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:01.863733   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:01.863802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:01.888840   53550 cri.go:89] found id: ""
	I1213 08:57:01.888853   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.888866   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:01.888871   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:01.888931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:01.915920   53550 cri.go:89] found id: ""
	I1213 08:57:01.915933   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.915940   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:01.915944   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:01.916002   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:01.945751   53550 cri.go:89] found id: ""
	I1213 08:57:01.945765   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.945771   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:01.945776   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:01.945845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:01.970743   53550 cri.go:89] found id: ""
	I1213 08:57:01.970757   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.970765   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:01.970773   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:01.970782   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:02.026866   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:02.026889   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:02.038522   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:02.038539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:02.102348   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:02.102361   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:02.102375   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:02.169043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:02.169063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:04.696543   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:04.706341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:04.706437   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:04.731229   53550 cri.go:89] found id: ""
	I1213 08:57:04.731243   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.731250   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:04.731255   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:04.731313   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:04.755649   53550 cri.go:89] found id: ""
	I1213 08:57:04.755664   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.755671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:04.755675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:04.755731   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:04.792911   53550 cri.go:89] found id: ""
	I1213 08:57:04.792925   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.792932   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:04.792937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:04.793004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:04.819883   53550 cri.go:89] found id: ""
	I1213 08:57:04.819898   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.819905   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:04.819910   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:04.819977   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:04.849837   53550 cri.go:89] found id: ""
	I1213 08:57:04.849851   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.849858   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:04.849863   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:04.849918   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:04.874858   53550 cri.go:89] found id: ""
	I1213 08:57:04.874882   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.874890   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:04.874895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:04.874960   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:04.903606   53550 cri.go:89] found id: ""
	I1213 08:57:04.903627   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.903634   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:04.903643   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:04.903654   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:04.974645   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:04.974655   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:04.974665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:05.042463   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:05.042483   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:05.073448   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:05.073463   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:05.138728   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:05.138751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.650339   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:07.660396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:07.660456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:07.683873   53550 cri.go:89] found id: ""
	I1213 08:57:07.683886   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.683893   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:07.683898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:07.683955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:07.708331   53550 cri.go:89] found id: ""
	I1213 08:57:07.708345   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.708352   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:07.708357   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:07.708413   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:07.732899   53550 cri.go:89] found id: ""
	I1213 08:57:07.732913   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.732920   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:07.732925   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:07.732984   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:07.757287   53550 cri.go:89] found id: ""
	I1213 08:57:07.757301   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.757308   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:07.757313   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:07.757384   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:07.795374   53550 cri.go:89] found id: ""
	I1213 08:57:07.795387   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.795394   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:07.795399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:07.795464   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:07.825153   53550 cri.go:89] found id: ""
	I1213 08:57:07.825167   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.825173   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:07.825182   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:07.825237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:07.852307   53550 cri.go:89] found id: ""
	I1213 08:57:07.852321   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.852327   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:07.852336   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:07.852345   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:07.880059   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:07.880077   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:07.939241   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:07.939258   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.949880   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:07.949895   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:08.020565   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:08.020576   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:08.020587   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
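Every kubectl attempt in these cycles dies the same way: the kubeconfig at /var/lib/minikube/kubeconfig points the client at https://localhost:8441, and nothing is listening on that port, so the TCP connect is refused before any API request is sent. Two quick checks from inside the node would confirm this; a hedged sketch using standard tools (these commands are illustrative, not taken from this log):

    sudo ss -ltnp | grep 8441                # empty output: no process bound to the apiserver port
    curl -k https://localhost:8441/healthz   # 'connection refused' matches the errors above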
	I1213 08:57:10.587648   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:10.597489   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:10.597549   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:10.628550   53550 cri.go:89] found id: ""
	I1213 08:57:10.628564   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.628571   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:10.628579   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:10.628636   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:10.652715   53550 cri.go:89] found id: ""
	I1213 08:57:10.652728   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.652735   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:10.652740   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:10.652800   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:10.676571   53550 cri.go:89] found id: ""
	I1213 08:57:10.676585   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.676591   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:10.676596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:10.676656   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:10.701425   53550 cri.go:89] found id: ""
	I1213 08:57:10.701439   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.701446   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:10.701451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:10.701512   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:10.725031   53550 cri.go:89] found id: ""
	I1213 08:57:10.725044   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.725051   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:10.725056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:10.725115   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:10.748783   53550 cri.go:89] found id: ""
	I1213 08:57:10.748796   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.748803   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:10.748808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:10.748865   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:10.782351   53550 cri.go:89] found id: ""
	I1213 08:57:10.782364   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.782371   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:10.782379   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:10.782389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:10.795735   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:10.795751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:10.871365   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:10.871375   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:10.871386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.934169   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:10.934186   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:10.960579   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:10.960595   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.522265   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:13.532592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:13.532651   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:13.557594   53550 cri.go:89] found id: ""
	I1213 08:57:13.557607   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.557614   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:13.557622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:13.557678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:13.582015   53550 cri.go:89] found id: ""
	I1213 08:57:13.582029   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.582036   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:13.582041   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:13.582101   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:13.606414   53550 cri.go:89] found id: ""
	I1213 08:57:13.606430   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.606437   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:13.606442   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:13.606501   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:13.633258   53550 cri.go:89] found id: ""
	I1213 08:57:13.633271   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.633278   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:13.633283   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:13.633347   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:13.657138   53550 cri.go:89] found id: ""
	I1213 08:57:13.657151   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.657158   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:13.657163   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:13.657220   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:13.680740   53550 cri.go:89] found id: ""
	I1213 08:57:13.680754   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.680760   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:13.680766   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:13.680821   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:13.704953   53550 cri.go:89] found id: ""
	I1213 08:57:13.704966   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.704973   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:13.704981   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:13.704992   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:13.770673   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:13.770683   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:13.770696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:13.840896   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:13.840915   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:13.870203   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:13.870219   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.927703   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:13.927721   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	[... six further identical log-gathering passes elided (08:57:16, 08:57:19, 08:57:22, 08:57:25, 08:57:28, 08:57:31; kubectl PIDs 14718, 14833, 14940, 15035, 15141, 15242): each pass re-runs the same pgrep and crictl checks, finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, gathers kubelet/dmesg/containerd/container-status logs in varying order, and "describe nodes" again exits with status 1 on "connection refused" against localhost:8441 ...]
	I1213 08:57:33.959197   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:33.969353   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:33.969420   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:33.994169   53550 cri.go:89] found id: ""
	I1213 08:57:33.994183   53550 logs.go:282] 0 containers: []
	W1213 08:57:33.994190   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:33.994195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:33.994253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:34.022338   53550 cri.go:89] found id: ""
	I1213 08:57:34.022367   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.022375   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:34.022380   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:34.022457   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:34.055485   53550 cri.go:89] found id: ""
	I1213 08:57:34.055547   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.055563   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:34.055569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:34.055645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:34.085397   53550 cri.go:89] found id: ""
	I1213 08:57:34.085411   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.085419   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:34.085424   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:34.085487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:34.112540   53550 cri.go:89] found id: ""
	I1213 08:57:34.112553   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.112561   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:34.112566   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:34.112622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:34.137911   53550 cri.go:89] found id: ""
	I1213 08:57:34.137934   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.137942   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:34.137947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:34.138013   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:34.165184   53550 cri.go:89] found id: ""
	I1213 08:57:34.165197   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.165204   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:34.165213   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:34.165224   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:34.221937   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:34.221954   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:34.232900   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:34.232925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:34.299398   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:34.291492   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.291917   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.293598   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.294066   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.295548   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:34.299409   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:34.299422   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:34.362086   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:34.362104   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:36.894643   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:36.904509   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:36.904571   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:36.928971   53550 cri.go:89] found id: ""
	I1213 08:57:36.928986   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.928993   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:36.928998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:36.929055   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:36.963924   53550 cri.go:89] found id: ""
	I1213 08:57:36.963938   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.963945   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:36.963956   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:36.964015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:36.989352   53550 cri.go:89] found id: ""
	I1213 08:57:36.989366   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.989373   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:36.989378   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:36.989435   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:37.022947   53550 cri.go:89] found id: ""
	I1213 08:57:37.022973   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.022982   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:37.022987   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:37.023065   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:37.058627   53550 cri.go:89] found id: ""
	I1213 08:57:37.058642   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.058649   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:37.058654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:37.058711   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:37.093026   53550 cri.go:89] found id: ""
	I1213 08:57:37.093047   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.093054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:37.093059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:37.093127   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:37.119099   53550 cri.go:89] found id: ""
	I1213 08:57:37.119113   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.119120   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:37.119127   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:37.119142   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:37.129746   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:37.129770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:37.192251   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:37.184204   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.185001   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186648   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186953   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.188434   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:37.192263   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:37.192274   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:37.258678   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:37.258697   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:37.286406   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:37.286421   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:39.843274   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:39.853155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:39.853221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:39.880613   53550 cri.go:89] found id: ""
	I1213 08:57:39.880627   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.880634   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:39.880639   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:39.880695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:39.908166   53550 cri.go:89] found id: ""
	I1213 08:57:39.908179   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.908191   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:39.908197   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:39.908255   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:39.931780   53550 cri.go:89] found id: ""
	I1213 08:57:39.931803   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.931811   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:39.931816   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:39.931885   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:39.959597   53550 cri.go:89] found id: ""
	I1213 08:57:39.959610   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.959617   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:39.959622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:39.959678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:39.987876   53550 cri.go:89] found id: ""
	I1213 08:57:39.987889   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.987896   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:39.987901   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:39.987955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:40.032588   53550 cri.go:89] found id: ""
	I1213 08:57:40.032603   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.032610   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:40.032615   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:40.032675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:40.061908   53550 cri.go:89] found id: ""
	I1213 08:57:40.061922   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.061929   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:40.061937   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:40.061947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:40.126971   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:40.126990   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:40.143091   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:40.143107   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:40.207107   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:40.198855   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.199548   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201367   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201930   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.203470   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:40.207117   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:40.207127   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:40.276818   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:40.276842   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:42.806068   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:42.816147   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:42.816212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:42.844268   53550 cri.go:89] found id: ""
	I1213 08:57:42.844281   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.844288   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:42.844294   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:42.844353   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:42.869114   53550 cri.go:89] found id: ""
	I1213 08:57:42.869127   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.869134   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:42.869139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:42.869195   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:42.892972   53550 cri.go:89] found id: ""
	I1213 08:57:42.892986   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.892993   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:42.892998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:42.893072   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:42.916620   53550 cri.go:89] found id: ""
	I1213 08:57:42.916633   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.916640   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:42.916646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:42.916702   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:42.940313   53550 cri.go:89] found id: ""
	I1213 08:57:42.940327   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.940334   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:42.940339   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:42.940394   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:42.965365   53550 cri.go:89] found id: ""
	I1213 08:57:42.965379   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.965386   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:42.965391   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:42.965451   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:42.990702   53550 cri.go:89] found id: ""
	I1213 08:57:42.990715   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.990722   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:42.990729   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:42.990742   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:43.048989   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:43.049008   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:43.061818   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:43.061836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:43.129375   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:43.121071   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.121717   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.123392   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.124082   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.125650   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:43.129386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:43.129396   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:43.191354   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:43.191373   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.723775   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:45.733853   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:45.733913   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:45.765626   53550 cri.go:89] found id: ""
	I1213 08:57:45.765639   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.765646   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:45.765652   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:45.765713   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:45.793721   53550 cri.go:89] found id: ""
	I1213 08:57:45.793734   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.793741   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:45.793746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:45.793802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:45.822307   53550 cri.go:89] found id: ""
	I1213 08:57:45.822320   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.822341   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:45.822347   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:45.822411   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:45.851368   53550 cri.go:89] found id: ""
	I1213 08:57:45.851382   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.851390   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:45.851395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:45.851454   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:45.877295   53550 cri.go:89] found id: ""
	I1213 08:57:45.877308   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.877321   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:45.877326   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:45.877382   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:45.905661   53550 cri.go:89] found id: ""
	I1213 08:57:45.905674   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.905681   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:45.905686   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:45.905745   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:45.934028   53550 cri.go:89] found id: ""
	I1213 08:57:45.934042   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.934050   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:45.934058   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:45.934068   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.962148   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:45.962164   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:46.017986   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:46.018005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:46.031923   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:46.031939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:46.106367   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:46.098570   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.099183   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.100789   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.101326   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.102477   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:46.106379   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:46.106389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:48.670805   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:48.680874   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:48.680935   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:48.704942   53550 cri.go:89] found id: ""
	I1213 08:57:48.704955   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.704962   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:48.704968   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:48.705029   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:48.729965   53550 cri.go:89] found id: ""
	I1213 08:57:48.729979   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.729986   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:48.729991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:48.730048   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:48.754712   53550 cri.go:89] found id: ""
	I1213 08:57:48.754726   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.754733   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:48.754739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:48.754798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:48.786991   53550 cri.go:89] found id: ""
	I1213 08:57:48.787014   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.787021   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:48.787026   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:48.787082   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:48.812918   53550 cri.go:89] found id: ""
	I1213 08:57:48.812932   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.812939   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:48.812943   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:48.813010   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:48.841512   53550 cri.go:89] found id: ""
	I1213 08:57:48.841525   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.841533   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:48.841538   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:48.841597   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:48.866500   53550 cri.go:89] found id: ""
	I1213 08:57:48.866514   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.866521   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:48.866529   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:48.866539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:48.922975   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:48.922993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:48.933525   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:48.933540   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:48.995831   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:48.995841   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:48.995852   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:49.061866   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:49.061885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.594845   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:51.606962   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:51.607021   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:51.630371   53550 cri.go:89] found id: ""
	I1213 08:57:51.630390   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.630397   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:51.630402   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:51.630456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:51.655753   53550 cri.go:89] found id: ""
	I1213 08:57:51.655768   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.655775   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:51.655780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:51.655835   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:51.680116   53550 cri.go:89] found id: ""
	I1213 08:57:51.680130   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.680136   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:51.680142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:51.680199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:51.703715   53550 cri.go:89] found id: ""
	I1213 08:57:51.703728   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.703734   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:51.703739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:51.703798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:51.728242   53550 cri.go:89] found id: ""
	I1213 08:57:51.728257   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.728263   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:51.728268   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:51.728334   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:51.752764   53550 cri.go:89] found id: ""
	I1213 08:57:51.752777   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.752783   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:51.752788   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:51.752845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:51.776542   53550 cri.go:89] found id: ""
	I1213 08:57:51.776556   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.776562   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:51.776570   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:51.776583   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.809113   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:51.809129   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:51.868930   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:51.868948   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:51.879570   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:51.879594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:51.948757   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:51.948767   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:51.948777   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.516634   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:54.526661   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:54.526738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:54.553106   53550 cri.go:89] found id: ""
	I1213 08:57:54.553120   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.553126   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:54.553132   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:54.553190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:54.581404   53550 cri.go:89] found id: ""
	I1213 08:57:54.581417   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.581426   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:54.581430   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:54.581484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:54.605783   53550 cri.go:89] found id: ""
	I1213 08:57:54.605796   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.605803   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:54.605807   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:54.605862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:54.634146   53550 cri.go:89] found id: ""
	I1213 08:57:54.634160   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.634167   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:54.634171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:54.634227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:54.658720   53550 cri.go:89] found id: ""
	I1213 08:57:54.658734   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.658741   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:54.658746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:54.658803   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:54.683926   53550 cri.go:89] found id: ""
	I1213 08:57:54.683940   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.683947   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:54.683952   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:54.684011   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:54.712272   53550 cri.go:89] found id: ""
	I1213 08:57:54.712286   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.712293   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:54.712300   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:54.712312   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:54.769590   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:54.769607   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:54.781369   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:54.781386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:54.846793   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:54.846803   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:54.846813   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.913758   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:54.913778   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.444332   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:57.453993   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:57.454058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:57.478195   53550 cri.go:89] found id: ""
	I1213 08:57:57.478209   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.478225   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:57.478231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:57.478301   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:57.502242   53550 cri.go:89] found id: ""
	I1213 08:57:57.502269   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.502277   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:57.502282   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:57.502346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:57.525845   53550 cri.go:89] found id: ""
	I1213 08:57:57.525859   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.525867   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:57.525872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:57.525931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:57.549123   53550 cri.go:89] found id: ""
	I1213 08:57:57.549137   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.549143   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:57.549148   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:57.549203   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:57.576988   53550 cri.go:89] found id: ""
	I1213 08:57:57.577002   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.577009   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:57.577019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:57.577076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:57.599837   53550 cri.go:89] found id: ""
	I1213 08:57:57.599851   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.599858   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:57.599864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:57.599932   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:57.623671   53550 cri.go:89] found id: ""
	I1213 08:57:57.623685   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.623693   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:57.623700   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:57.623711   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:57.634031   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:57.634046   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:57.695658   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:57.695668   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:57.695678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:57.762393   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:57.762412   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.790711   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:57.790726   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
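
Each polling cycle above asks the CRI for the same seven control-plane components and finds none of them running. A sketch of that per-component loop, assuming crictl is on PATH and runnable via sudo (illustrative only, not the cri.go implementation):

// list_components.go: query crictl for each expected component.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The components minikube checks for in the cycles above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the warnings above: 0 containers found.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
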
	I1213 08:58:00.355817   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:00.372076   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:00.372142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:00.409377   53550 cri.go:89] found id: ""
	I1213 08:58:00.409392   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.409398   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:00.409404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:00.409467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:00.436239   53550 cri.go:89] found id: ""
	I1213 08:58:00.436254   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.436261   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:00.436266   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:00.436326   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:00.461909   53550 cri.go:89] found id: ""
	I1213 08:58:00.461922   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.461929   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:00.461934   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:00.461991   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:00.491257   53550 cri.go:89] found id: ""
	I1213 08:58:00.491270   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.491276   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:00.491281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:00.491339   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:00.517632   53550 cri.go:89] found id: ""
	I1213 08:58:00.517646   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.517658   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:00.517664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:00.517726   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:00.543370   53550 cri.go:89] found id: ""
	I1213 08:58:00.543384   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.543391   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:00.543396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:00.543460   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:00.568967   53550 cri.go:89] found id: ""
	I1213 08:58:00.568980   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.568987   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:00.568995   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:00.569005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:00.636984   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:00.636994   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:00.637006   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:00.699893   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:00.699911   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:00.730182   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:00.730198   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.787828   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:00.787847   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
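
Every "describe nodes" attempt fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port. A sketch that reduces the failure to its core, a refused TCP dial (the host and port are taken from the errors above):

// dial_check.go: confirm why the kubectl calls are refused.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver process, this dial fails with
	// "connect: connection refused", exactly as in the stderr above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
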
	I1213 08:58:03.298762   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:03.310337   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:03.310399   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:03.345482   53550 cri.go:89] found id: ""
	I1213 08:58:03.345496   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.345503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:03.345508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:03.345568   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:03.370651   53550 cri.go:89] found id: ""
	I1213 08:58:03.370664   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.370671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:03.370676   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:03.370730   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:03.393554   53550 cri.go:89] found id: ""
	I1213 08:58:03.393568   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.393574   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:03.393580   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:03.393638   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:03.418084   53550 cri.go:89] found id: ""
	I1213 08:58:03.418098   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.418105   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:03.418110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:03.418180   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:03.442426   53550 cri.go:89] found id: ""
	I1213 08:58:03.442440   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.442447   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:03.442451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:03.442510   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:03.467378   53550 cri.go:89] found id: ""
	I1213 08:58:03.467391   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.467398   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:03.467404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:03.467539   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:03.493640   53550 cri.go:89] found id: ""
	I1213 08:58:03.493653   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.493660   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:03.493668   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:03.493678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:03.559295   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:03.559305   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:03.559315   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:03.622616   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:03.622633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:03.656517   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:03.656534   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:03.715111   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:03.715131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.226614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:06.237139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:06.237200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:06.261636   53550 cri.go:89] found id: ""
	I1213 08:58:06.261652   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.261659   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:06.261664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:06.261727   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:06.293692   53550 cri.go:89] found id: ""
	I1213 08:58:06.293707   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.293714   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:06.293719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:06.293778   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:06.321565   53550 cri.go:89] found id: ""
	I1213 08:58:06.321578   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.321584   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:06.321589   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:06.321643   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:06.348809   53550 cri.go:89] found id: ""
	I1213 08:58:06.348856   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.348862   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:06.348869   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:06.348925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:06.378146   53550 cri.go:89] found id: ""
	I1213 08:58:06.378159   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.378166   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:06.378171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:06.378227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:06.402993   53550 cri.go:89] found id: ""
	I1213 08:58:06.403006   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.403013   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:06.403019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:06.403074   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:06.429062   53550 cri.go:89] found id: ""
	I1213 08:58:06.429076   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.429084   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:06.429092   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:06.429102   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:06.485200   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:06.485218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.496017   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:06.496033   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:06.561266   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:06.561275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:06.561299   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:06.624429   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:06.624451   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
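
The journalctl-based steps ("Gathering logs for containerd ...", "Gathering logs for kubelet ...") tail the last 400 journal entries for each unit. A sketch of the same collection, assuming systemd units named containerd and kubelet exist on the node (illustrative only):

// gather_units.go: tail recent journal entries for each unit.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"containerd", "kubelet"} {
		// Mirrors the logged command: journalctl -u <unit> -n 400.
		out, err := exec.Command("sudo", "journalctl", "-u", unit,
			"-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("== %s (%d bytes captured) ==\n", unit, len(out))
	}
}
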
	I1213 08:58:09.152326   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:09.162496   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:09.162552   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:09.187570   53550 cri.go:89] found id: ""
	I1213 08:58:09.187583   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.187590   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:09.187595   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:09.187653   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:09.211361   53550 cri.go:89] found id: ""
	I1213 08:58:09.211375   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.211382   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:09.211387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:09.211441   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:09.240289   53550 cri.go:89] found id: ""
	I1213 08:58:09.240302   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.240310   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:09.240316   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:09.240381   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:09.263680   53550 cri.go:89] found id: ""
	I1213 08:58:09.263694   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.263701   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:09.263706   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:09.263767   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:09.289437   53550 cri.go:89] found id: ""
	I1213 08:58:09.289451   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.289458   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:09.289463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:09.289524   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:09.323385   53550 cri.go:89] found id: ""
	I1213 08:58:09.323398   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.323405   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:09.323410   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:09.323467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:09.353577   53550 cri.go:89] found id: ""
	I1213 08:58:09.353590   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.353597   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:09.353605   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:09.353616   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.382787   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:09.382803   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:09.449042   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:09.449060   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:09.460226   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:09.460242   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:09.528091   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:09.528102   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:09.528112   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:12.097937   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:12.108009   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:12.108068   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:12.131531   53550 cri.go:89] found id: ""
	I1213 08:58:12.131546   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.131553   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:12.131558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:12.131621   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:12.161149   53550 cri.go:89] found id: ""
	I1213 08:58:12.161163   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.161170   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:12.161175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:12.161237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:12.187318   53550 cri.go:89] found id: ""
	I1213 08:58:12.187332   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.187339   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:12.187344   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:12.187400   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:12.212736   53550 cri.go:89] found id: ""
	I1213 08:58:12.212749   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.212756   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:12.212761   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:12.212818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:12.236946   53550 cri.go:89] found id: ""
	I1213 08:58:12.236959   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.236967   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:12.236973   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:12.237036   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:12.260663   53550 cri.go:89] found id: ""
	I1213 08:58:12.260677   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.260683   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:12.260690   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:12.260746   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:12.292004   53550 cri.go:89] found id: ""
	I1213 08:58:12.292022   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.292030   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:12.292038   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:12.292055   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:12.338118   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:12.338134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:12.397489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:12.397527   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:12.408810   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:12.408834   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:12.471195   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:12.471207   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:12.471217   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:15.035075   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:15.046491   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:15.046557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:15.073355   53550 cri.go:89] found id: ""
	I1213 08:58:15.073368   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.073375   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:15.073381   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:15.073444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:15.098531   53550 cri.go:89] found id: ""
	I1213 08:58:15.098545   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.098553   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:15.098558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:15.098620   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:15.125009   53550 cri.go:89] found id: ""
	I1213 08:58:15.125024   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.125031   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:15.125036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:15.125096   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:15.150565   53550 cri.go:89] found id: ""
	I1213 08:58:15.150579   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.150586   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:15.150591   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:15.150650   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:15.176538   53550 cri.go:89] found id: ""
	I1213 08:58:15.176552   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.176559   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:15.176564   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:15.176622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:15.200435   53550 cri.go:89] found id: ""
	I1213 08:58:15.200449   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.200472   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:15.200477   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:15.200554   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:15.224596   53550 cri.go:89] found id: ""
	I1213 08:58:15.224610   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.224617   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:15.224625   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:15.224636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:15.299267   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:15.288531   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.289386   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292038   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292350   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.293775   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:15.288531   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.289386   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292038   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292350   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.293775   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:15.299277   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:15.299287   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:15.370114   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:15.370160   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:15.400555   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:15.400569   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:15.458044   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:15.458062   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
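
The timestamps show the whole diagnostic cycle repeating roughly every three seconds while minikube waits for an apiserver process that never appears. A sketch of that wait loop's shape (the two-minute deadline here is an assumption for illustration; minikube's actual timeout may differ):

// wait_apiserver.go: poll for a kube-apiserver process on a fixed cadence.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits nonzero when no process matches, so a nil
		// error means the apiserver was found.
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(3 * time.Second) // cadence seen in the log above
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
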
	I1213 08:58:17.970291   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:17.980756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:17.980816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:18.009453   53550 cri.go:89] found id: ""
	I1213 08:58:18.009470   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.009478   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:18.009483   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:18.009912   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:18.040545   53550 cri.go:89] found id: ""
	I1213 08:58:18.040560   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.040567   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:18.040572   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:18.040634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:18.065694   53550 cri.go:89] found id: ""
	I1213 08:58:18.065711   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.065721   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:18.065727   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:18.065795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:18.091133   53550 cri.go:89] found id: ""
	I1213 08:58:18.091147   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.091155   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:18.091169   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:18.091228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:18.118236   53550 cri.go:89] found id: ""
	I1213 08:58:18.118250   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.118257   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:18.118262   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:18.118321   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:18.141948   53550 cri.go:89] found id: ""
	I1213 08:58:18.141961   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.141968   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:18.141974   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:18.142030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:18.167116   53550 cri.go:89] found id: ""
	I1213 08:58:18.167130   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.167137   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:18.167145   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:18.167158   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:18.242811   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:18.234010   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.235596   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.236202   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237378   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237833   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:18.234010   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.235596   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.236202   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237378   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237833   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:18.242822   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:18.242833   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:18.314955   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:18.314974   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:18.343207   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:18.343222   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:18.398868   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:18.398887   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:20.911155   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:20.921270   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:20.921329   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:20.949336   53550 cri.go:89] found id: ""
	I1213 08:58:20.949350   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.949356   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:20.949361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:20.949418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:20.973382   53550 cri.go:89] found id: ""
	I1213 08:58:20.973395   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.973402   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:20.973408   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:20.973470   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:21.009413   53550 cri.go:89] found id: ""
	I1213 08:58:21.009431   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.009439   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:21.009444   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:21.009508   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:21.038840   53550 cri.go:89] found id: ""
	I1213 08:58:21.038898   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.038906   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:21.038913   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:21.038981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:21.062283   53550 cri.go:89] found id: ""
	I1213 08:58:21.062296   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.062303   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:21.062308   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:21.062430   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:21.086629   53550 cri.go:89] found id: ""
	I1213 08:58:21.086643   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.086650   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:21.086655   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:21.086725   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:21.113708   53550 cri.go:89] found id: ""
	I1213 08:58:21.113722   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.113729   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:21.113737   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:21.113749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:21.169462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:21.169481   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:21.180306   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:21.180328   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:21.242376   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:21.233989   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.234781   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236321   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236625   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.238091   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:21.233989   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.234781   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236321   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236625   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.238091   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:21.242386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:21.242400   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:21.306044   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:21.306063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
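(The cycle above is minikube polling for any surviving control-plane container via crictl before falling back to collecting kubelet/dmesg/containerd logs. A minimal sketch of the same per-component check, using only the crictl invocation that appears verbatim in this log; the loop structure and component list are assumptions drawn from the entries above:

    # Sketch: reproduce minikube's per-component container check (crictl command taken from the log).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done

Every component returning an empty id list, as here, is what drives the repeated "0 containers" entries.)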
	I1213 08:58:23.838510   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:23.848550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:23.848611   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:23.878675   53550 cri.go:89] found id: ""
	I1213 08:58:23.878689   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.878697   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:23.878702   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:23.878770   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:23.904045   53550 cri.go:89] found id: ""
	I1213 08:58:23.904060   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.904067   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:23.904072   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:23.904142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:23.929949   53550 cri.go:89] found id: ""
	I1213 08:58:23.929963   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.929970   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:23.929975   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:23.930035   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:23.955048   53550 cri.go:89] found id: ""
	I1213 08:58:23.955062   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.955069   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:23.955078   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:23.955136   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:23.979633   53550 cri.go:89] found id: ""
	I1213 08:58:23.979647   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.979654   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:23.979659   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:23.979716   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:24.006479   53550 cri.go:89] found id: ""
	I1213 08:58:24.006495   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.006503   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:24.006520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:24.006593   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:24.033349   53550 cri.go:89] found id: ""
	I1213 08:58:24.033369   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.033376   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:24.033385   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:24.033395   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:24.060616   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:24.060635   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:24.119305   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:24.119324   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:24.130335   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:24.130350   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:24.197036   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:24.188772   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.189749   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191420   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191829   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.193342   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:24.188772   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.189749   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191420   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191829   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.193342   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:24.197046   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:24.197058   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:26.764306   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:26.775859   53550 kubeadm.go:602] duration metric: took 4m4.554296141s to restartPrimaryControlPlane
	W1213 08:58:26.775922   53550 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 08:58:26.776056   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 08:58:27.191363   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:58:27.204546   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:58:27.212501   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:58:27.212553   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:58:27.220364   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:58:27.220373   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 08:58:27.220423   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:58:27.228123   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:58:27.228179   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:58:27.235737   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:58:27.243839   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:58:27.243909   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:58:27.251406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.259128   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:58:27.259197   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.266406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:58:27.274290   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:58:27.274347   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
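(The sequence above is minikube's stale-config cleanup: it greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep exits non-zero. A sketch assembled from the exact grep/rm commands shown above; the loop is an assumed condensation:

    # Sketch of the stale-config cleanup seen in the log entries above.
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done

Here every grep fails with "No such file or directory" because the earlier kubeadm reset already removed the files, so each rm is a no-op.)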
	I1213 08:58:27.281913   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:58:27.321302   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:58:27.321349   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:58:27.394605   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:58:27.394672   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:58:27.394706   53550 kubeadm.go:319] OS: Linux
	I1213 08:58:27.394750   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:58:27.394798   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:58:27.394844   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:58:27.394891   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:58:27.394938   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:58:27.394984   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:58:27.395028   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:58:27.395075   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:58:27.395120   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:58:27.462440   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:58:27.462546   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:58:27.462635   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:58:27.476078   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:58:27.481288   53550 out.go:252]   - Generating certificates and keys ...
	I1213 08:58:27.481378   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:58:27.481454   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:58:27.481542   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:58:27.481611   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:58:27.481690   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:58:27.481750   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:58:27.481822   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:58:27.481892   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:58:27.481974   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:58:27.482055   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:58:27.482101   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:58:27.482165   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:58:27.905850   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:58:28.178703   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:58:28.541521   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:58:28.686915   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:58:29.281245   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:58:29.281953   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:58:29.285342   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:58:29.288544   53550 out.go:252]   - Booting up control plane ...
	I1213 08:58:29.288640   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:58:29.288718   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:58:29.289378   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:58:29.310312   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:58:29.310629   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:58:29.318324   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:58:29.318581   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:58:29.318622   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:58:29.457400   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:58:29.457506   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:02:29.458561   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216357s
	I1213 09:02:29.458592   53550 kubeadm.go:319] 
	I1213 09:02:29.458674   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:02:29.458746   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:02:29.458876   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:02:29.458882   53550 kubeadm.go:319] 
	I1213 09:02:29.458995   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:02:29.459029   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:02:29.459061   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:02:29.459065   53550 kubeadm.go:319] 
	I1213 09:02:29.463013   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:02:29.463412   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:02:29.463534   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:02:29.463755   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:02:29.463760   53550 kubeadm.go:319] 
	I1213 09:02:29.463824   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 09:02:29.463944   53550 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
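(The wait-control-plane failure above means kubeadm's kubelet health probe never succeeded within the 4m0s window. The probe is the plain HTTP call named in the error text, so it can be reproduced manually on the node; a sketch using only the commands quoted in the error above, assuming the default healthz port 10248:

    # Sketch: probe kubelet health the same way the kubeadm error describes,
    # then inspect the unit if the probe fails (commands are from the error text).
    curl -sSL http://127.0.0.1:10248/healthz || {
      systemctl status kubelet
      journalctl -xeu kubelet
    })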
	
	I1213 09:02:29.464028   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:02:29.874512   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:02:29.888184   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:02:29.888240   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:02:29.896053   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:02:29.896063   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 09:02:29.896114   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:02:29.904008   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:02:29.904062   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:02:29.911453   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:02:29.919369   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:02:29.919421   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:02:29.927024   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.934996   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:02:29.935050   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.942367   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:02:29.949946   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:02:29.950000   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:02:29.957647   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:02:29.995750   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:02:29.995800   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:02:30.116553   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:02:30.116615   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:02:30.116649   53550 kubeadm.go:319] OS: Linux
	I1213 09:02:30.116693   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:02:30.116740   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:02:30.116785   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:02:30.116832   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:02:30.116879   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:02:30.116934   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:02:30.116978   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:02:30.117024   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:02:30.117071   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:02:30.188905   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:02:30.189016   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:02:30.189118   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:02:30.196039   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:02:30.201335   53550 out.go:252]   - Generating certificates and keys ...
	I1213 09:02:30.201440   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:02:30.201521   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:02:30.201609   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:02:30.201670   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:02:30.201747   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:02:30.201835   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:02:30.201908   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:02:30.201970   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:02:30.202045   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:02:30.202116   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:02:30.202153   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:02:30.202209   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:02:30.255550   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:02:30.417221   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:02:30.868435   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:02:31.140633   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:02:31.298069   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:02:31.298995   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:02:31.302412   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:02:31.305750   53550 out.go:252]   - Booting up control plane ...
	I1213 09:02:31.305854   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:02:31.305930   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:02:31.305995   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:02:31.327053   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:02:31.327169   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:02:31.334414   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:02:31.334677   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:02:31.334719   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:02:31.474852   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:02:31.474965   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:06:31.473943   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000237859s
	I1213 09:06:31.473980   53550 kubeadm.go:319] 
	I1213 09:06:31.474081   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:06:31.474292   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:06:31.474479   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:06:31.474488   53550 kubeadm.go:319] 
	I1213 09:06:31.474674   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:06:31.474967   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:06:31.475021   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:06:31.475025   53550 kubeadm.go:319] 
	I1213 09:06:31.479982   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:06:31.480734   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:06:31.480923   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:06:31.481347   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:06:31.481355   53550 kubeadm.go:319] 
	I1213 09:06:31.481475   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:06:31.481540   53550 kubeadm.go:403] duration metric: took 12m9.29303151s to StartCluster
	I1213 09:06:31.481569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:06:31.481637   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:06:31.505490   53550 cri.go:89] found id: ""
	I1213 09:06:31.505505   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.505511   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:06:31.505516   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:06:31.505576   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:06:31.533408   53550 cri.go:89] found id: ""
	I1213 09:06:31.533422   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.533429   53550 logs.go:284] No container was found matching "etcd"
	I1213 09:06:31.533433   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:06:31.533495   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:06:31.563195   53550 cri.go:89] found id: ""
	I1213 09:06:31.563218   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.563225   53550 logs.go:284] No container was found matching "coredns"
	I1213 09:06:31.563230   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:06:31.563288   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:06:31.588179   53550 cri.go:89] found id: ""
	I1213 09:06:31.588192   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.588199   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:06:31.588204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:06:31.588262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:06:31.613124   53550 cri.go:89] found id: ""
	I1213 09:06:31.613137   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.613144   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:06:31.613149   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:06:31.613204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:06:31.637268   53550 cri.go:89] found id: ""
	I1213 09:06:31.637282   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.637297   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:06:31.637303   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:06:31.637360   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:06:31.661188   53550 cri.go:89] found id: ""
	I1213 09:06:31.661208   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.661214   53550 logs.go:284] No container was found matching "kindnet"
	I1213 09:06:31.661223   53550 logs.go:123] Gathering logs for container status ...
	I1213 09:06:31.661232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:06:31.690241   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 09:06:31.690257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:06:31.745899   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 09:06:31.745917   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:06:31.756123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:06:31.756137   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:06:31.847485   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:06:31.847496   53550 logs.go:123] Gathering logs for containerd ...
	I1213 09:06:31.847506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 09:06:31.908510   53550 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:06:31.908551   53550 out.go:285] * 
	W1213 09:06:31.908654   53550 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.908704   53550 out.go:285] * 
	W1213 09:06:31.910815   53550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:06:31.916295   53550 out.go:203] 
	W1213 09:06:31.920097   53550 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.920144   53550 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:06:31.920163   53550 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:06:31.923856   53550 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:33.151590   21050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:33.152314   21050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:33.154077   21050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:33.155422   21050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:33.156181   21050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:06:33 up 49 min,  0 user,  load average: 0.05, 0.18, 0.33
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:06:29 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:30 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 09:06:30 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:30 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:30 functional-074420 kubelet[20853]: E1213 09:06:30.319408   20853 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:30 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:30 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 09:06:31 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:31 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:31 functional-074420 kubelet[20859]: E1213 09:06:31.068248   20859 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 09:06:31 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:31 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:31 functional-074420 kubelet[20944]: E1213 09:06:31.831847   20944 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 09:06:32 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:32 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:32 functional-074420 kubelet[20967]: E1213 09:06:32.593960   20967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (388.107789ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.31s)
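The kubelet restart loop above (restart counters 318 through 321) and the kubeadm warnings earlier in this log describe the same root cause: the host runs cgroups v1, and kubelet v1.35.0-beta.0 fails configuration validation unless cgroup v1 support is explicitly opted back in. A minimal sketch of that opt-in, assuming the kubelet configuration path shown in the kubeadm output (/var/lib/kubelet/config.yaml) and assuming the option named in the warning is spelled failCgroupV1 in KubeletConfiguration YAML; not verified against this build:

	# inside the node, e.g. via: minikube -p functional-074420 ssh
	# append the cgroup v1 opt-in unless it is already present
	grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml || \
	  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet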

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-074420 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-074420 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (61.038937ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-074420 get po -l tier=control-plane -n kube-system -o=json": exit status 1
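Every kubectl call in this test fails the same way because the crash-looping kubelet never started the apiserver static pod, so nothing listens on 192.168.49.2:8441. A quick probe of that endpoint (address taken from the error above) separates a dead apiserver from a kubeconfig problem; -k is used because the apiserver's certificate is signed by minikube's own CA:

	curl -k https://192.168.49.2:8441/healthz   # here: connection refused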
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
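The inspect output also shows how the host reaches this node: every container port, including apiserver port 8441, is published only on 127.0.0.1 under an ephemeral host port (32791 in this run). The same mapping can be read back directly, using the container name from the inspect output above:

	docker port functional-074420 8441/tcp
	# expected here: 127.0.0.1:32791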
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (309.157573ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr                                                  │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ mount   │ -p functional-049633 --kill=true                                                                                                                        │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ image   │ functional-049633 image ls --format yaml --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format json --alsologtostderr                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls --format table --alsologtostderr                                                                                             │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ image   │ functional-049633 image ls                                                                                                                              │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ delete  │ -p functional-049633                                                                                                                                    │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
	│ start   │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │                     │
	│ start   │ -p functional-074420 --alsologtostderr -v=8                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:47 UTC │                     │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add registry.k8s.io/pause:latest                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache add minikube-local-cache-test:functional-074420                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ functional-074420 cache delete minikube-local-cache-test:functional-074420                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl images                                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ cache   │ functional-074420 cache reload                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ kubectl │ functional-074420 kubectl -- --context functional-074420 get pods                                                                                       │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ start   │ -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:54:17.881015   53550 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:54:17.881119   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881124   53550 out.go:374] Setting ErrFile to fd 2...
	I1213 08:54:17.881127   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881367   53550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:54:17.881711   53550 out.go:368] Setting JSON to false
	I1213 08:54:17.882486   53550 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2210,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:54:17.882543   53550 start.go:143] virtualization:  
	I1213 08:54:17.885916   53550 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:54:17.888999   53550 notify.go:221] Checking for updates...
	I1213 08:54:17.889435   53550 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:54:17.892383   53550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:54:17.895200   53550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:54:17.898042   53550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:54:17.900839   53550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:54:17.903626   53550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:54:17.906955   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:17.907037   53550 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:54:17.945038   53550 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:54:17.945157   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.004102   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:17.99317471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.004214   53550 docker.go:319] overlay module found
	I1213 08:54:18.009730   53550 out.go:179] * Using the docker driver based on existing profile
	I1213 08:54:18.012694   53550 start.go:309] selected driver: docker
	I1213 08:54:18.012706   53550 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.012816   53550 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:54:18.012919   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.070601   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:18.060838365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.071017   53550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:54:18.071040   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:18.071105   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:18.071147   53550 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.074420   53550 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:54:18.077242   53550 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:54:18.080227   53550 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:54:18.083176   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:18.083216   53550 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:54:18.083225   53550 cache.go:65] Caching tarball of preloaded images
	I1213 08:54:18.083262   53550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:54:18.083328   53550 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:54:18.083337   53550 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:54:18.083454   53550 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:54:18.104039   53550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:54:18.104049   53550 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:54:18.104071   53550 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:54:18.104097   53550 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:54:18.104173   53550 start.go:364] duration metric: took 60.013µs to acquireMachinesLock for "functional-074420"
	I1213 08:54:18.104193   53550 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:54:18.104198   53550 fix.go:54] fixHost starting: 
	I1213 08:54:18.104469   53550 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:54:18.121469   53550 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:54:18.121489   53550 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:54:18.124664   53550 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:54:18.124700   53550 machine.go:94] provisionDockerMachine start ...
	I1213 08:54:18.124779   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.142221   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.142535   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.142542   53550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:54:18.290889   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.290902   53550 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:54:18.290965   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.308398   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.308699   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.308706   53550 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:54:18.463898   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.463977   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.481808   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.482113   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.482128   53550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:54:18.639897   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:54:18.639913   53550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:54:18.639945   53550 ubuntu.go:190] setting up certificates
	I1213 08:54:18.639960   53550 provision.go:84] configureAuth start
	I1213 08:54:18.640021   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:18.657069   53550 provision.go:143] copyHostCerts
	I1213 08:54:18.657137   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:54:18.657145   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:54:18.657224   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:54:18.657317   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:54:18.657321   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:54:18.657345   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:54:18.657393   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:54:18.657396   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:54:18.657421   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:54:18.657462   53550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:54:18.978851   53550 provision.go:177] copyRemoteCerts
	I1213 08:54:18.978913   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:54:18.978954   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.996497   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.099309   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:54:19.116489   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:54:19.134491   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:54:19.152584   53550 provision.go:87] duration metric: took 512.603195ms to configureAuth
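configureAuth regenerated the machine server certificate with the SANs listed at 08:54:18.657 (127.0.0.1, 192.168.49.2, functional-074420, localhost, minikube). A minimal way to confirm those SANs landed in the cert, using the path from the log:

    # print the SAN block of the freshly generated server cert
    openssl x509 -noout -text -in /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'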
	I1213 08:54:19.152601   53550 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:54:19.152798   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:19.152804   53550 machine.go:97] duration metric: took 1.028099835s to provisionDockerMachine
	I1213 08:54:19.152810   53550 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:54:19.152820   53550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:54:19.152868   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:54:19.152914   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.170238   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.275637   53550 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:54:19.280193   53550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:54:19.280211   53550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:54:19.280223   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:54:19.280276   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:54:19.280348   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:54:19.280419   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:54:19.280458   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:54:19.288420   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:19.306689   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:54:19.324595   53550 start.go:296] duration metric: took 171.770829ms for postStartSetup
	I1213 08:54:19.324673   53550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:54:19.324742   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.347206   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.449063   53550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:54:19.453865   53550 fix.go:56] duration metric: took 1.349660427s for fixHost
	I1213 08:54:19.453881   53550 start.go:83] releasing machines lock for "functional-074420", held for 1.349700469s
	I1213 08:54:19.453945   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:19.471349   53550 ssh_runner.go:195] Run: cat /version.json
	I1213 08:54:19.471396   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.471420   53550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:54:19.471481   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.492979   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.505163   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.686546   53550 ssh_runner.go:195] Run: systemctl --version
	I1213 08:54:19.692986   53550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:54:19.697303   53550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:54:19.697365   53550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:54:19.705133   53550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:54:19.705146   53550 start.go:496] detecting cgroup driver to use...
	I1213 08:54:19.705176   53550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:54:19.705226   53550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:54:19.720729   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:54:19.733460   53550 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:54:19.733514   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:54:19.748695   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:54:19.761831   53550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:54:19.870034   53550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:54:19.996014   53550 docker.go:234] disabling docker service ...
	I1213 08:54:19.996078   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:54:20.014799   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:54:20.030104   53550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:54:20.162441   53550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:54:20.283014   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:54:20.297184   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:54:20.311847   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:54:20.321141   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:54:20.330609   53550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:54:20.330677   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:54:20.339444   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.348072   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:54:20.356752   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.365663   53550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:54:20.373861   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:54:20.383214   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:54:20.392296   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:54:20.401182   53550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:54:20.408521   53550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:54:20.415857   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:20.524736   53550 ssh_runner.go:195] Run: sudo systemctl restart containerd
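The sed passes above are how minikube forces containerd onto the cgroupfs driver and the runc v2 shim before restarting it. Condensed to the edits that matter for the cgroup driver, taken verbatim from the log:

    # force SystemdCgroup off and retire the legacy v1 shim, then apply
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd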
	I1213 08:54:20.667475   53550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:54:20.667553   53550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:54:20.671249   53550 start.go:564] Will wait 60s for crictl version
	I1213 08:54:20.671308   53550 ssh_runner.go:195] Run: which crictl
	I1213 08:54:20.674869   53550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:54:20.699246   53550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:54:20.699301   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.723418   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.748134   53550 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:54:20.751095   53550 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:54:20.766935   53550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:54:20.773949   53550 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 08:54:20.776880   53550 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1213 08:54:20.777036   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:20.777116   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.804622   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.804634   53550 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:54:20.804691   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.834431   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.834444   53550 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:54:20.834451   53550 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:54:20.834559   53550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
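The empty ExecStart= line in the unit above is the usual systemd idiom: it clears any ExecStart inherited from the packaged unit so the drop-in can substitute minikube's own kubelet command. To inspect the merged result on the node, one could run (not part of the log):

    # show the effective kubelet unit including the 10-kubeadm.conf drop-in
    systemctl cat kubelet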
	I1213 08:54:20.834624   53550 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:54:20.867174   53550 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 08:54:20.867192   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:20.867200   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:20.867220   53550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:54:20.867242   53550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:54:20.867356   53550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:54:20.867422   53550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:54:20.875127   53550 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:54:20.875185   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:54:20.882880   53550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:54:20.898646   53550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:54:20.911841   53550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
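The 2087-byte kubeadm.yaml.new written here is the config dumped above, and it is what the restart path replays below. As a sanity check, recent kubeadm releases can validate such a file before it is applied; a hedged example using the binaries path from the log:

    # hypothetical pre-check; 'kubeadm config validate' exists only in recent kubeadm releases
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new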
	I1213 08:54:20.925067   53550 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:54:20.928972   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:21.047902   53550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:54:21.521591   53550 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:54:21.521603   53550 certs.go:195] generating shared ca certs ...
	I1213 08:54:21.521617   53550 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:54:21.521756   53550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:54:21.521796   53550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:54:21.521802   53550 certs.go:257] generating profile certs ...
	I1213 08:54:21.521883   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:54:21.521933   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:54:21.521973   53550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:54:21.522082   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:54:21.522113   53550 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:54:21.522120   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:54:21.522146   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:54:21.522168   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:54:21.522190   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:54:21.522232   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:21.522796   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:54:21.547463   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:54:21.565502   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:54:21.583029   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:54:21.600675   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:54:21.617821   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:54:21.634794   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:54:21.652088   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:54:21.669338   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:54:21.685563   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:54:21.702834   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:54:21.719220   53550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:54:21.731588   53550 ssh_runner.go:195] Run: openssl version
	I1213 08:54:21.737357   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.744365   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:54:21.751316   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754910   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754961   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.795815   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:54:21.802933   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.809987   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:54:21.817141   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820600   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820668   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.861349   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:54:21.868464   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.875279   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:54:21.882257   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.885950   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.886012   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.927672   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
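The .0 filenames being tested above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each symlink in /etc/ssl/certs is named <hash>.0, where the hash is exactly what the preceding command prints:

    # print the subject hash OpenSSL expects as the symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem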
	I1213 08:54:21.934830   53550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:54:21.938562   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:54:21.979443   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:54:22.023588   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:54:22.065341   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:54:22.106598   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:54:22.147410   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
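-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a zero exit is what lets minikube skip regenerating each of the certs checked above. For example:

    # exit 0 if still valid 24h from now, 1 (with "Certificate will expire") otherwise
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400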
	I1213 08:54:22.188516   53550 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:22.188592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:54:22.188655   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.213570   53550 cri.go:89] found id: ""
	I1213 08:54:22.213647   53550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:54:22.221547   53550 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:54:22.221555   53550 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:54:22.221616   53550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:54:22.229060   53550 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.229555   53550 kubeconfig.go:125] found "functional-074420" server: "https://192.168.49.2:8441"
	I1213 08:54:22.232016   53550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:54:22.239904   53550 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:39:47.751417218 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 08:54:20.919594824 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
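Drift detection is simply the exit status of the diff -u above: the only difference is the enable-admission-plugins value injected by this test, and any non-empty diff forces the reconfigure path that follows. Reproduced by hand on the node:

    # non-zero exit here is what triggers "will reconfigure cluster"
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; echo "diff exit: $?"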
	I1213 08:54:22.239924   53550 kubeadm.go:1161] stopping kube-system containers ...
	I1213 08:54:22.239936   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 08:54:22.239998   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.266484   53550 cri.go:89] found id: ""
	I1213 08:54:22.266565   53550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 08:54:22.285823   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:54:22.293457   53550 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:43 /etc/kubernetes/scheduler.conf
	
	I1213 08:54:22.293536   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:54:22.301460   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:54:22.308894   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.308947   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:54:22.316083   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.323905   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.323959   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.331273   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:54:22.338736   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.338789   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:54:22.346320   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:54:22.354109   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:22.400461   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.430760   53550 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.030276983s)
	I1213 08:54:24.430822   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.648055   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.718708   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.760609   53550 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:54:24.760672   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.261709   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.761435   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` poll repeats every ~500ms from 08:54:26 through 08:55:23 (~116 identical attempts) while minikube waits for the apiserver process to appear ...]
	I1213 08:55:24.261657   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
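The collapsed run above is minikube's wait for the apiserver process. A bounded sketch of an equivalent loop, assuming the ~500ms cadence visible in the timestamps:

    # poll up to 120 times at 500ms for the apiserver process (a sketch, not minikube's code)
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done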
	I1213 08:55:24.761756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:24.761828   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:24.786217   53550 cri.go:89] found id: ""
	I1213 08:55:24.786236   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.786243   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:24.786249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:24.786328   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:24.809104   53550 cri.go:89] found id: ""
	I1213 08:55:24.809118   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.809125   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:24.809130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:24.809187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:24.832861   53550 cri.go:89] found id: ""
	I1213 08:55:24.832880   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.832887   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:24.832892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:24.832949   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:24.856552   53550 cri.go:89] found id: ""
	I1213 08:55:24.856566   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.856573   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:24.856578   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:24.856634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:24.879617   53550 cri.go:89] found id: ""
	I1213 08:55:24.879631   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.879638   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:24.879643   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:24.879700   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:24.905506   53550 cri.go:89] found id: ""
	I1213 08:55:24.905520   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.905526   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:24.905532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:24.905588   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:24.930567   53550 cri.go:89] found id: ""
	I1213 08:55:24.930581   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.930587   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:24.930595   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:24.930605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:24.961663   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:24.961679   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:25.017689   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:25.017709   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:25.035228   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:25.035257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:25.112728   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:25.112738   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:25.112750   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
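Each cycle above follows the same pattern: minikube polls for a running kube-apiserver process, enumerates CRI containers for every control-plane component by name, and, finding none, gathers kubelet, dmesg, describe-nodes, and containerd diagnostics before retrying. A minimal manual reproduction of the container check, run against the node (a sketch only; the profile name is a placeholder, not taken from this report):

    # List all kube-apiserver containers, running or exited, printing IDs only.
    # An empty result matches the `found id: ""` lines above: the container was
    # never created, so the control plane never came up.
    minikube ssh -p <profile> -- sudo crictl ps -a --quiet --name=kube-apiserver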
	I1213 08:55:27.676671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:27.686646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:27.686705   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:27.710449   53550 cri.go:89] found id: ""
	I1213 08:55:27.710462   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.710469   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:27.710474   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:27.710531   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:27.734910   53550 cri.go:89] found id: ""
	I1213 08:55:27.734923   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.734943   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:27.734949   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:27.735007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:27.762767   53550 cri.go:89] found id: ""
	I1213 08:55:27.762787   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.762794   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:27.762799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:27.762853   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:27.789263   53550 cri.go:89] found id: ""
	I1213 08:55:27.789282   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.789288   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:27.789293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:27.789352   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:27.817361   53550 cri.go:89] found id: ""
	I1213 08:55:27.817374   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.817381   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:27.817386   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:27.817444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:27.841034   53550 cri.go:89] found id: ""
	I1213 08:55:27.841047   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.841054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:27.841059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:27.841114   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:27.865949   53550 cri.go:89] found id: ""
	I1213 08:55:27.865963   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.865970   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:27.865978   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:27.865988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:27.921352   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:27.921372   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:27.934950   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:27.934966   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:28.012009   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:28.012023   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:28.012036   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:28.081214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:28.081231   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
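The describe-nodes probe fails with "connection refused" on [::1]:8441 in every cycle, which indicates nothing is listening on the configured apiserver port at all, rather than an apiserver rejecting requests. One way to confirm that directly on the node (assuming ss from iproute2 is present there, which is an assumption):

    # Show TCP listeners and confirm no process owns port 8441.
    sudo ss -tlnp | grep 8441 || echo "no listener on port 8441"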
	I1213 08:55:30.614736   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:30.624755   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:30.624816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:30.650174   53550 cri.go:89] found id: ""
	I1213 08:55:30.650188   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.650195   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:30.650200   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:30.650257   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:30.675572   53550 cri.go:89] found id: ""
	I1213 08:55:30.675585   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.675592   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:30.675597   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:30.675661   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:30.700274   53550 cri.go:89] found id: ""
	I1213 08:55:30.700288   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.700295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:30.700301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:30.700357   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:30.724242   53550 cri.go:89] found id: ""
	I1213 08:55:30.724255   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.724262   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:30.724267   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:30.724322   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:30.749004   53550 cri.go:89] found id: ""
	I1213 08:55:30.749018   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.749025   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:30.749029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:30.749091   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:30.772837   53550 cri.go:89] found id: ""
	I1213 08:55:30.772850   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.772857   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:30.772862   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:30.772917   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:30.796329   53550 cri.go:89] found id: ""
	I1213 08:55:30.796343   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.796350   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:30.796358   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:30.796369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:30.806800   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:30.806816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:30.869919   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:30.861579   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.861934   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.863644   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.864162   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.865728   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:30.869929   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:30.869939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:30.936472   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:30.936496   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:30.965152   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:30.965167   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:33.525938   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:33.536142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:33.536204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:33.560265   53550 cri.go:89] found id: ""
	I1213 08:55:33.560279   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.560286   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:33.560291   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:33.560346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:33.584181   53550 cri.go:89] found id: ""
	I1213 08:55:33.584194   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.584201   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:33.584206   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:33.584261   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:33.612544   53550 cri.go:89] found id: ""
	I1213 08:55:33.612558   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.612566   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:33.612571   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:33.612628   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:33.636515   53550 cri.go:89] found id: ""
	I1213 08:55:33.636529   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.636536   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:33.636541   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:33.636601   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:33.661822   53550 cri.go:89] found id: ""
	I1213 08:55:33.661835   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.661842   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:33.661847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:33.661909   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:33.694727   53550 cri.go:89] found id: ""
	I1213 08:55:33.694741   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.694748   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:33.694753   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:33.694812   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:33.721852   53550 cri.go:89] found id: ""
	I1213 08:55:33.721866   53550 logs.go:282] 0 containers: []
	W1213 08:55:33.721873   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:33.721882   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:33.721892   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:33.789428   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:33.780794   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.781535   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783102   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.783765   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:33.785318   11062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:33.789438   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:33.789448   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:33.851847   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:33.851865   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:33.879583   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:33.879599   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:33.937089   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:33.937108   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
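The cycles shuffle the gathering order (kubelet, dmesg, describe nodes, containerd, container status) but the commands themselves are fixed. To inspect the same units interactively instead of sampling the last 400 lines, the equivalent journalctl calls (assuming the systemd unit names kubelet and containerd shown in the log) are:

    # Follow the kubelet unit live to catch restart loops as they happen.
    sudo journalctl -u kubelet -f
    # Dump the most recent containerd lines without a pager.
    sudo journalctl -u containerd -n 400 --no-pager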
	I1213 08:55:36.449743   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:36.459975   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:36.460040   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:36.485035   53550 cri.go:89] found id: ""
	I1213 08:55:36.485048   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.485055   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:36.485060   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:36.485116   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:36.509956   53550 cri.go:89] found id: ""
	I1213 08:55:36.509970   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.509977   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:36.509983   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:36.510040   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:36.535929   53550 cri.go:89] found id: ""
	I1213 08:55:36.535942   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.535949   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:36.535954   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:36.536014   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:36.560722   53550 cri.go:89] found id: ""
	I1213 08:55:36.560735   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.560742   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:36.560747   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:36.560818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:36.586434   53550 cri.go:89] found id: ""
	I1213 08:55:36.586448   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.586455   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:36.586459   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:36.586517   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:36.615482   53550 cri.go:89] found id: ""
	I1213 08:55:36.615506   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.615531   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:36.615536   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:36.615611   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:36.642408   53550 cri.go:89] found id: ""
	I1213 08:55:36.642422   53550 logs.go:282] 0 containers: []
	W1213 08:55:36.642439   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:36.642446   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:36.642457   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:36.669924   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:36.669946   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:36.728697   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:36.728717   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:36.740739   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:36.740759   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:36.807194   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:36.798750   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.799446   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.801401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.802010   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:36.803502   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:36.807204   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:36.807218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:39.369875   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:39.380141   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:39.380202   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:39.407846   53550 cri.go:89] found id: ""
	I1213 08:55:39.407859   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.407867   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:39.407872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:39.407929   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:39.432500   53550 cri.go:89] found id: ""
	I1213 08:55:39.432514   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.432520   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:39.432525   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:39.432584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:39.457872   53550 cri.go:89] found id: ""
	I1213 08:55:39.457886   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.457893   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:39.457898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:39.457961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:39.483359   53550 cri.go:89] found id: ""
	I1213 08:55:39.483373   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.483379   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:39.483384   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:39.483458   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:39.508786   53550 cri.go:89] found id: ""
	I1213 08:55:39.508800   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.508807   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:39.508812   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:39.508879   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:39.533162   53550 cri.go:89] found id: ""
	I1213 08:55:39.533177   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.533184   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:39.533189   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:39.533247   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:39.558039   53550 cri.go:89] found id: ""
	I1213 08:55:39.558052   53550 logs.go:282] 0 containers: []
	W1213 08:55:39.558059   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:39.558067   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:39.558076   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:39.618400   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:39.618423   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:39.629575   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:39.629592   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:39.694333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:39.686653   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.687216   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.688669   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.689084   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:39.690548   11275 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:39.694344   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:39.694355   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:39.757320   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:39.757338   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
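The describe-nodes step uses the version-pinned kubectl binary that minikube installs on the node, paired with the node-local kubeconfig; run by hand it is exactly the command from the log:

    # Fails with "connection refused" until an apiserver serves on localhost:8441.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig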
	I1213 08:55:42.285019   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:42.297179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:42.297241   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:42.328576   53550 cri.go:89] found id: ""
	I1213 08:55:42.328589   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.328611   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:42.328616   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:42.328678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:42.356055   53550 cri.go:89] found id: ""
	I1213 08:55:42.356069   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.356077   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:42.356082   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:42.356141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:42.380770   53550 cri.go:89] found id: ""
	I1213 08:55:42.380783   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.380790   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:42.380796   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:42.380866   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:42.409446   53550 cri.go:89] found id: ""
	I1213 08:55:42.409460   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.409466   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:42.409471   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:42.409530   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:42.433502   53550 cri.go:89] found id: ""
	I1213 08:55:42.433515   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.433522   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:42.433527   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:42.433583   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:42.458312   53550 cri.go:89] found id: ""
	I1213 08:55:42.458325   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.458336   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:42.458341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:42.458401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:42.482681   53550 cri.go:89] found id: ""
	I1213 08:55:42.482694   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.482702   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:42.482709   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:42.482719   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:42.544167   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:42.544185   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:42.572064   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:42.572079   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:42.629874   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:42.629892   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:42.641069   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:42.641084   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:42.704996   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:42.696662   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.697320   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.698873   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.699442   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.701188   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:45.206980   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:45.225798   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:45.225900   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:45.260556   53550 cri.go:89] found id: ""
	I1213 08:55:45.260579   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.260586   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:45.260592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:45.260660   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:45.300170   53550 cri.go:89] found id: ""
	I1213 08:55:45.300183   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.300190   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:45.300195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:45.300253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:45.335036   53550 cri.go:89] found id: ""
	I1213 08:55:45.335050   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.335057   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:45.335062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:45.335123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:45.366574   53550 cri.go:89] found id: ""
	I1213 08:55:45.366587   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.366594   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:45.366599   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:45.366659   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:45.391767   53550 cri.go:89] found id: ""
	I1213 08:55:45.391781   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.391788   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:45.391793   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:45.391850   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:45.416855   53550 cri.go:89] found id: ""
	I1213 08:55:45.416869   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.416876   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:45.416882   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:45.416941   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:45.441837   53550 cri.go:89] found id: ""
	I1213 08:55:45.441859   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.441867   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:45.441875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:45.441885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:45.499186   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:45.499203   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:45.510383   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:45.510401   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:45.577305   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:45.568662   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.569333   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.570928   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.571293   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.572858   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:45.577329   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:45.577340   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:45.639739   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:45.639761   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
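The container-status gather wraps a fallback chain: the backtick substitution resolves the installed crictl path (or leaves the bare name if `which` finds nothing), and if that invocation fails the whole command falls back to docker. Expanded with modern substitution syntax it reads:

    # Prefer crictl; fall back to `docker ps -a` if crictl is absent or errors.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a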
	I1213 08:55:48.174772   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:48.185188   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:48.185250   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:48.210180   53550 cri.go:89] found id: ""
	I1213 08:55:48.210194   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.210200   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:48.210205   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:48.210268   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:48.235002   53550 cri.go:89] found id: ""
	I1213 08:55:48.235015   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.235022   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:48.235027   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:48.235085   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:48.259922   53550 cri.go:89] found id: ""
	I1213 08:55:48.259936   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.259943   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:48.259948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:48.260007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:48.304590   53550 cri.go:89] found id: ""
	I1213 08:55:48.304605   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.304611   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:48.304617   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:48.304675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:48.342676   53550 cri.go:89] found id: ""
	I1213 08:55:48.342690   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.342697   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:48.342703   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:48.342759   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:48.366578   53550 cri.go:89] found id: ""
	I1213 08:55:48.366592   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.366599   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:48.366604   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:48.366673   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:48.391059   53550 cri.go:89] found id: ""
	I1213 08:55:48.391073   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.391080   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:48.391089   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:48.391099   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:48.462962   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:48.454137   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.455049   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.456663   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.457133   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.458628   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:48.462973   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:48.462988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:48.526213   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:48.526232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:48.556890   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:48.556905   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:48.613408   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:48.613425   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
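	The loop above is minikube's apiserver wait: pgrep for a running kube-apiserver, a crictl listing per control-plane component, and a fresh round of log gathering when everything comes back empty. The same probes can be rerun by hand; a minimal sketch, assuming the functional-074420 profile from this run is still up (the commands mirror the ssh_runner invocations logged above):

		# Illustrative manual replay of the health probes above.
		minikube -p functional-074420 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		minikube -p functional-074420 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
		minikube -p functional-074420 ssh -- sudo journalctl -u kubelet -n 400 --no-pager

	An empty crictl listing, as in every iteration here, means containerd never created an apiserver container in any state, which points at the static pod path rather than at a server that starts and then crashes.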
	I1213 08:55:51.124505   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:51.134928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:51.134985   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:51.159141   53550 cri.go:89] found id: ""
	I1213 08:55:51.159154   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.159161   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:51.159166   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:51.159222   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:51.182690   53550 cri.go:89] found id: ""
	I1213 08:55:51.182704   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.182711   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:51.182716   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:51.182773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:51.208685   53550 cri.go:89] found id: ""
	I1213 08:55:51.208698   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.208705   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:51.208710   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:51.208766   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:51.233183   53550 cri.go:89] found id: ""
	I1213 08:55:51.233197   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.233204   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:51.233209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:51.233270   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:51.258042   53550 cri.go:89] found id: ""
	I1213 08:55:51.258069   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.258076   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:51.258081   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:51.258147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:51.290467   53550 cri.go:89] found id: ""
	I1213 08:55:51.290481   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.290488   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:51.290495   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:51.290566   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:51.323202   53550 cri.go:89] found id: ""
	I1213 08:55:51.323216   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.323223   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:51.323231   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:51.323240   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:51.394188   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:51.394206   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:51.426214   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:51.426230   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:51.485838   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:51.485855   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:51.496565   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:51.496580   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:51.576933   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:51.567900   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.568559   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570248   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570845   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.572406   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
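	Every describe-nodes attempt fails the same way: kubectl on the node dials https://localhost:8441, the apiserver port recorded in /var/lib/minikube/kubeconfig, and gets connection refused. A refusal on loopback means nothing is listening on that port at all, not a TLS or auth problem. A quick manual probe, assuming the node is still reachable (anonymous access to /livez is the kubeadm-style default):

		# Probe the apiserver port directly; -s silences progress, -k skips
		# certificate verification, which is fine for a liveness check.
		minikube -p functional-074420 ssh -- curl -sk https://localhost:8441/livez || echo 'nothing listening on 8441'

	Given the empty crictl listings, the refusal is expected; the open question is why the kubelet never launched the static pods.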
	I1213 08:55:54.077204   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:54.087800   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:54.087874   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:54.113040   53550 cri.go:89] found id: ""
	I1213 08:55:54.113055   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.113062   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:54.113067   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:54.113124   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:54.138822   53550 cri.go:89] found id: ""
	I1213 08:55:54.138835   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.138842   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:54.138847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:54.138906   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:54.163439   53550 cri.go:89] found id: ""
	I1213 08:55:54.163452   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.163459   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:54.163465   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:54.163557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:54.188125   53550 cri.go:89] found id: ""
	I1213 08:55:54.188138   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.188145   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:54.188152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:54.188208   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:54.212893   53550 cri.go:89] found id: ""
	I1213 08:55:54.212907   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.212914   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:54.212920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:54.212981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:54.237373   53550 cri.go:89] found id: ""
	I1213 08:55:54.237386   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.237393   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:54.237399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:54.237459   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:54.265504   53550 cri.go:89] found id: ""
	I1213 08:55:54.265518   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.265525   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:54.265532   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:54.265542   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:54.333125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:54.333143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:54.347402   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:54.347418   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:54.412166   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:54.412175   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:54.412187   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:54.480709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:54.480730   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.010334   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:57.021059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:57.021120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:57.047281   53550 cri.go:89] found id: ""
	I1213 08:55:57.047294   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.047301   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:57.047306   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:57.047377   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:57.071416   53550 cri.go:89] found id: ""
	I1213 08:55:57.071429   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.071436   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:57.071441   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:57.071498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:57.101079   53550 cri.go:89] found id: ""
	I1213 08:55:57.101092   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.101104   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:57.101110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:57.101166   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:57.125577   53550 cri.go:89] found id: ""
	I1213 08:55:57.125591   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.125598   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:57.125603   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:57.125664   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:57.150869   53550 cri.go:89] found id: ""
	I1213 08:55:57.150883   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.150890   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:57.150895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:57.150952   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:57.175181   53550 cri.go:89] found id: ""
	I1213 08:55:57.175196   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.175203   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:57.175209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:57.175265   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:57.201951   53550 cri.go:89] found id: ""
	I1213 08:55:57.201964   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.201981   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:57.201989   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:57.202000   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.230175   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:57.230191   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:57.289371   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:57.289389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:57.301801   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:57.301816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:57.376259   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:57.376279   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:57.376290   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
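	When a control-plane container is absent rather than crashing, the next things to check are the static pod manifests and the kubelet that should be acting on them. A sketch of that follow-up, assuming the kubeadm default manifest path /etc/kubernetes/manifests (the path is not taken from this log):

		# Are the control-plane manifests present, and what does the kubelet
		# say about them? (paths are kubeadm defaults)
		minikube -p functional-074420 ssh -- ls -l /etc/kubernetes/manifests
		minikube -p functional-074420 ssh -- sudo journalctl -u kubelet --no-pager | grep -iE 'apiserver|static' | tail -n 20

	The kubelet excerpts this loop already gathers (journalctl -u kubelet -n 400) contain the same evidence; the grep only narrows 400 lines down to the relevant ones.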
	I1213 08:55:59.938203   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:59.948941   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:59.949015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:59.975053   53550 cri.go:89] found id: ""
	I1213 08:55:59.975067   53550 logs.go:282] 0 containers: []
	W1213 08:55:59.975074   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:59.975079   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:59.975140   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:00.036168   53550 cri.go:89] found id: ""
	I1213 08:56:00.036184   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.036198   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:00.036204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:00.036272   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:00.212433   53550 cri.go:89] found id: ""
	I1213 08:56:00.212448   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.212457   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:00.212463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:00.212534   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:00.329892   53550 cri.go:89] found id: ""
	I1213 08:56:00.329922   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.329931   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:00.329937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:00.330147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:00.418357   53550 cri.go:89] found id: ""
	I1213 08:56:00.418382   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.418390   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:00.418395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:00.418485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:00.472022   53550 cri.go:89] found id: ""
	I1213 08:56:00.472038   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.472057   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:00.472063   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:00.472147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:00.501778   53550 cri.go:89] found id: ""
	I1213 08:56:00.501793   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.501800   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:00.501809   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:00.501821   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:00.514889   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:00.514908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:00.586263   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:00.576477   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.577506   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.579365   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.580284   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.582096   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:00.586275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:00.586286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:00.651709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:00.651729   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:00.679944   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:00.679961   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:03.240030   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:03.250487   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:03.250564   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:03.276985   53550 cri.go:89] found id: ""
	I1213 08:56:03.276999   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.277006   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:03.277011   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:03.277079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:03.305874   53550 cri.go:89] found id: ""
	I1213 08:56:03.305887   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.305894   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:03.305900   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:03.305961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:03.332792   53550 cri.go:89] found id: ""
	I1213 08:56:03.332805   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.332812   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:03.332817   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:03.332875   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:03.359327   53550 cri.go:89] found id: ""
	I1213 08:56:03.359340   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.359347   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:03.359352   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:03.359414   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:03.383789   53550 cri.go:89] found id: ""
	I1213 08:56:03.383802   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.383818   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:03.383823   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:03.383881   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:03.409294   53550 cri.go:89] found id: ""
	I1213 08:56:03.409308   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.409315   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:03.409320   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:03.409380   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:03.433579   53550 cri.go:89] found id: ""
	I1213 08:56:03.433593   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.433600   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:03.433608   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:03.433620   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:03.444272   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:03.444288   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:03.513583   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:03.505953   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.506372   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507548   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507861   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.509300   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:03.513594   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:03.513605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:03.576629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:03.576649   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:03.608162   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:03.608178   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
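	The container-status gather repeated in each iteration uses a fallback chain: prefer an installed crictl, fall back to the bare name on PATH, and finally try docker if CRI tooling is missing entirely. The same idiom with modern $() quoting, as a standalone sketch:

		# List containers regardless of which runtime tooling is installed:
		# crictl if resolvable, else the bare name, else docker.
		sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a

	On this containerd node the crictl branch evidently succeeds, since every status gather returns without tripping the docker fallback.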
	I1213 08:56:06.165156   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:06.175029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:06.175086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:06.199548   53550 cri.go:89] found id: ""
	I1213 08:56:06.199561   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.199567   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:06.199573   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:06.199630   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:06.223345   53550 cri.go:89] found id: ""
	I1213 08:56:06.223358   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.223365   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:06.223370   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:06.223427   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:06.253772   53550 cri.go:89] found id: ""
	I1213 08:56:06.253785   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.253792   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:06.253797   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:06.253862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:06.285197   53550 cri.go:89] found id: ""
	I1213 08:56:06.285209   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.285216   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:06.285221   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:06.285287   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:06.311117   53550 cri.go:89] found id: ""
	I1213 08:56:06.311130   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.311137   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:06.311142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:06.311199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:06.347101   53550 cri.go:89] found id: ""
	I1213 08:56:06.347115   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.347121   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:06.347134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:06.347212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:06.373093   53550 cri.go:89] found id: ""
	I1213 08:56:06.373106   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.373113   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:06.373121   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:06.373131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.432261   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:06.432286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:06.443840   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:06.443858   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:06.510711   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:06.501971   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.502684   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.504393   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.505195   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.506872   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:06.510722   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:06.510745   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:06.572342   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:06.572360   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
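	The dmesg invocation in every cycle is worth unpacking: -P suppresses the pager that -H (human-readable output) would otherwise start, -L=never strips color escapes so the captured text stays plain, and --level warn,err,crit,alert,emerg keeps only warning-and-worse kernel messages before tail trims the result to the last 400 lines:

		# The report's kernel-log gather, flag by flag (util-linux dmesg).
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	The gather returns in roughly ten milliseconds each time, consistent with a short filtered kernel log rather than a flood of OOM or cgroup errors.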
	I1213 08:56:09.099708   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:09.109781   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:09.109837   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:09.134708   53550 cri.go:89] found id: ""
	I1213 08:56:09.134722   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.134729   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:09.134734   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:09.134793   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:09.159277   53550 cri.go:89] found id: ""
	I1213 08:56:09.159291   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.159297   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:09.159302   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:09.159367   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:09.185743   53550 cri.go:89] found id: ""
	I1213 08:56:09.185756   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.185763   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:09.185768   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:09.185827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:09.209881   53550 cri.go:89] found id: ""
	I1213 08:56:09.209894   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.209901   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:09.209907   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:09.209963   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:09.233078   53550 cri.go:89] found id: ""
	I1213 08:56:09.233091   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.233099   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:09.233104   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:09.233165   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:09.261187   53550 cri.go:89] found id: ""
	I1213 08:56:09.261200   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.261208   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:09.261216   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:09.261274   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:09.303988   53550 cri.go:89] found id: ""
	I1213 08:56:09.304001   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.304008   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:09.304016   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:09.304035   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:09.366963   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:09.366982   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:09.377754   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:09.377770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:09.445863   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:09.437325   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.437871   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.439698   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.440090   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.442054   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:09.445873   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:09.445884   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:09.507900   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:09.507918   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.036492   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:12.046919   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:12.046978   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:12.071196   53550 cri.go:89] found id: ""
	I1213 08:56:12.071211   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.071218   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:12.071223   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:12.071285   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:12.097508   53550 cri.go:89] found id: ""
	I1213 08:56:12.097522   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.097529   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:12.097534   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:12.097591   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:12.122628   53550 cri.go:89] found id: ""
	I1213 08:56:12.122641   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.122649   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:12.122654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:12.122714   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:12.147292   53550 cri.go:89] found id: ""
	I1213 08:56:12.147306   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.147313   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:12.147318   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:12.147385   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:12.171601   53550 cri.go:89] found id: ""
	I1213 08:56:12.171615   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.171622   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:12.171629   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:12.171685   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:12.195241   53550 cri.go:89] found id: ""
	I1213 08:56:12.195255   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.195272   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:12.195277   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:12.195332   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:12.220835   53550 cri.go:89] found id: ""
	I1213 08:56:12.220849   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.220866   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:12.220874   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:12.220883   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:12.283214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:12.283232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.322176   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:12.322192   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:12.382990   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:12.383007   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:12.393976   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:12.393993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:12.454561   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:12.446899   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.447411   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.448884   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.449202   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.450694   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
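Every describe-nodes attempt in this window fails identically: kubectl's discovery client cannot reach the apiserver, and "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the port at all (as opposed to a TLS or auth failure). A hypothetical manual check, not part of the harness, assuming the profile name functional-074420 from this run:

    # open a shell inside the node for this profile
    minikube ssh -p functional-074420
    # anything bound on the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # connection refused (as in the log) vs. a certificate problem
    curl -ksS https://localhost:8441/healthz || true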
	I1213 08:56:14.956323   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:14.966379   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:14.966439   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:14.992786   53550 cri.go:89] found id: ""
	I1213 08:56:14.992801   53550 logs.go:282] 0 containers: []
	W1213 08:56:14.992807   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:14.992813   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:14.992876   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:15.028638   53550 cri.go:89] found id: ""
	I1213 08:56:15.028653   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.028660   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:15.028666   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:15.028735   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:15.059274   53550 cri.go:89] found id: ""
	I1213 08:56:15.059288   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.059295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:15.059301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:15.059408   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:15.089311   53550 cri.go:89] found id: ""
	I1213 08:56:15.089324   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.089331   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:15.089336   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:15.089401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:15.118691   53550 cri.go:89] found id: ""
	I1213 08:56:15.118705   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.118712   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:15.118717   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:15.118773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:15.144494   53550 cri.go:89] found id: ""
	I1213 08:56:15.144507   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.144514   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:15.144519   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:15.144577   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:15.173885   53550 cri.go:89] found id: ""
	I1213 08:56:15.173899   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.173905   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:15.173914   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:15.173925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:15.236112   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:15.228066   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.228792   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230471   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230772   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.232236   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:15.236121   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:15.236134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:15.298113   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:15.298131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:15.342964   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:15.342980   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:15.400545   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:15.400563   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:17.911444   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:17.921343   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:17.921402   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:17.947826   53550 cri.go:89] found id: ""
	I1213 08:56:17.947840   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.947847   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:17.947852   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:17.947908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:17.971346   53550 cri.go:89] found id: ""
	I1213 08:56:17.971376   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.971383   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:17.971387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:17.971449   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:17.999271   53550 cri.go:89] found id: ""
	I1213 08:56:17.999285   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.999292   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:17.999298   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:17.999371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:18.031971   53550 cri.go:89] found id: ""
	I1213 08:56:18.031984   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.031991   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:18.031996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:18.032058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:18.057098   53550 cri.go:89] found id: ""
	I1213 08:56:18.057112   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.057119   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:18.057127   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:18.057187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:18.081981   53550 cri.go:89] found id: ""
	I1213 08:56:18.082007   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.082014   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:18.082021   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:18.082092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:18.108138   53550 cri.go:89] found id: ""
	I1213 08:56:18.108152   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.108159   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:18.108166   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:18.108179   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:18.118705   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:18.118723   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:18.182232   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:18.173836   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.174533   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176073   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176393   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.177924   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:18.182242   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:18.182253   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:18.243585   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:18.243606   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:18.292655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:18.292671   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
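Each polling round is the same scan: for every control-plane component, cri.go lists all containers, running or exited, whose name matches, and an empty ID list produces the No container was found matching warning. A minimal bash rendering of that scan, using only the crictl invocation that appears verbatim above:

    # an empty ID list for a component means its container was never created
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      printf '%s: ' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l
    done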
	I1213 08:56:20.860353   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:20.870680   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:20.870753   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:20.895485   53550 cri.go:89] found id: ""
	I1213 08:56:20.895499   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.895506   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:20.895532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:20.895592   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:20.921461   53550 cri.go:89] found id: ""
	I1213 08:56:20.921475   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.921482   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:20.921486   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:20.921545   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:20.946484   53550 cri.go:89] found id: ""
	I1213 08:56:20.946498   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.946507   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:20.946512   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:20.946570   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:20.971723   53550 cri.go:89] found id: ""
	I1213 08:56:20.971737   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.971744   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:20.971749   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:20.971806   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:20.996903   53550 cri.go:89] found id: ""
	I1213 08:56:20.996917   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.996924   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:20.996929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:20.996987   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:21.025270   53550 cri.go:89] found id: ""
	I1213 08:56:21.025283   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.025290   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:21.025295   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:21.025354   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:21.050984   53550 cri.go:89] found id: ""
	I1213 08:56:21.050998   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.051005   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:21.051013   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:21.051024   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:21.061853   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:21.061867   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:21.130720   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:21.122077   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.122912   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124411   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124796   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.126237   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:21.130741   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:21.130753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:21.194629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:21.194647   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:21.222790   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:21.222806   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
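With every component scan coming back empty, the harness collects the same four log sources per round: kubelet and containerd from the journal, filtered kernel messages, and the CRI container table. The commands below are taken verbatim from the Run: lines above; grouped together they make a small bundle that can be replayed by hand inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400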
	I1213 08:56:23.780448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:23.790523   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:23.790584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:23.815703   53550 cri.go:89] found id: ""
	I1213 08:56:23.815717   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.815724   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:23.815729   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:23.815790   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:23.844047   53550 cri.go:89] found id: ""
	I1213 08:56:23.844062   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.844069   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:23.844074   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:23.844132   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:23.868824   53550 cri.go:89] found id: ""
	I1213 08:56:23.868837   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.868844   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:23.868849   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:23.868908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:23.893054   53550 cri.go:89] found id: ""
	I1213 08:56:23.893067   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.893084   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:23.893089   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:23.893158   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:23.918102   53550 cri.go:89] found id: ""
	I1213 08:56:23.918115   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.918141   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:23.918146   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:23.918221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:23.943674   53550 cri.go:89] found id: ""
	I1213 08:56:23.943706   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.943713   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:23.943719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:23.943780   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:23.969229   53550 cri.go:89] found id: ""
	I1213 08:56:23.969242   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.969250   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:23.969258   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:23.969268   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:24.024433   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:24.024452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:24.036371   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:24.036394   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:24.106333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:24.106343   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:24.106354   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:24.169184   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:24.169204   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
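The container-status command just above is worth unpacking: the backticked which crictl || echo crictl substitutes the full path to crictl when it is on PATH and the bare word crictl otherwise, and the trailing || sudo docker ps -a is a last-resort fallback for nodes where crictl fails entirely. Expanded into plain bash, the same logic reads:

    CRICTL="$(which crictl || echo crictl)"    # full path if found, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # fall back to docker only if crictl fails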
	I1213 08:56:26.698614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:26.708577   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:26.708633   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:26.732922   53550 cri.go:89] found id: ""
	I1213 08:56:26.732936   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.732943   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:26.732948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:26.733006   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:26.755987   53550 cri.go:89] found id: ""
	I1213 08:56:26.756000   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.756007   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:26.756012   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:26.756070   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:26.780069   53550 cri.go:89] found id: ""
	I1213 08:56:26.780082   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.780089   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:26.780094   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:26.780152   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:26.803904   53550 cri.go:89] found id: ""
	I1213 08:56:26.803916   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.803923   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:26.803928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:26.803983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:26.829092   53550 cri.go:89] found id: ""
	I1213 08:56:26.829106   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.829114   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:26.829119   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:26.829177   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:26.853845   53550 cri.go:89] found id: ""
	I1213 08:56:26.853858   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.853865   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:26.853870   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:26.853925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:26.878415   53550 cri.go:89] found id: ""
	I1213 08:56:26.878428   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.878435   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:26.878443   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:26.878452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:26.934265   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:26.934282   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:26.945523   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:26.945543   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:27.018637   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:27.009719   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.010496   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012205   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012866   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.014551   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:27.018647   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:27.018658   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:27.084954   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:27.084972   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:29.613085   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:29.622947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:29.623004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:29.646959   53550 cri.go:89] found id: ""
	I1213 08:56:29.646973   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.646980   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:29.646986   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:29.647044   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:29.671745   53550 cri.go:89] found id: ""
	I1213 08:56:29.671759   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.671766   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:29.671771   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:29.671827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:29.695958   53550 cri.go:89] found id: ""
	I1213 08:56:29.695972   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.695979   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:29.695984   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:29.696042   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:29.720480   53550 cri.go:89] found id: ""
	I1213 08:56:29.720494   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.720501   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:29.720506   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:29.720561   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:29.744988   53550 cri.go:89] found id: ""
	I1213 08:56:29.745001   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.745008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:29.745013   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:29.745069   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:29.768515   53550 cri.go:89] found id: ""
	I1213 08:56:29.768529   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.768536   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:29.768541   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:29.768600   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:29.792772   53550 cri.go:89] found id: ""
	I1213 08:56:29.792791   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.792798   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:29.792806   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:29.792815   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:29.848125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:29.848143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:29.859353   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:29.859369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:29.922416   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:29.914324   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.914996   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.915957   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.916631   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.918102   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:29.922426   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:29.922438   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:29.991606   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:29.991633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
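The timestamps give the cadence: the pgrep probe for a kube-apiserver process repeats roughly every three seconds (08:56:26.7, 29.6, 32.5, ...), with a full log-gathering round in between. A rough bash equivalent of such a wait loop; the deadline is illustrative, since the harness's actual timeout is not visible in this excerpt:

    deadline=$((SECONDS + 300))   # assumed 5-minute budget, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
      sleep 3                     # matches the ~3 s spacing between probes in the log
    done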
	I1213 08:56:32.539218   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:32.551358   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:32.551433   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:32.579755   53550 cri.go:89] found id: ""
	I1213 08:56:32.579769   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.579776   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:32.579782   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:32.579840   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:32.606298   53550 cri.go:89] found id: ""
	I1213 08:56:32.606312   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.606319   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:32.606325   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:32.606386   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:32.631992   53550 cri.go:89] found id: ""
	I1213 08:56:32.632006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.632023   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:32.632028   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:32.632086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:32.663992   53550 cri.go:89] found id: ""
	I1213 08:56:32.664006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.664013   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:32.664019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:32.664079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:32.688738   53550 cri.go:89] found id: ""
	I1213 08:56:32.688752   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.688759   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:32.688764   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:32.688824   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:32.714559   53550 cri.go:89] found id: ""
	I1213 08:56:32.714573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.714590   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:32.714596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:32.714663   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:32.741559   53550 cri.go:89] found id: ""
	I1213 08:56:32.741573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.741579   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:32.741587   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:32.741597   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:32.800820   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:32.800838   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:32.811825   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:32.811840   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:32.885502   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:32.876782   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.877261   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.879781   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.880103   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.881563   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:32.885513   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:32.885525   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:32.948272   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:32.948291   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
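Since no component container was ever created, the most informative of the gathered sources is usually the kubelet journal: kubelet is what turns static-pod manifests into containers, so its most recent lines generally say why it did not. A hypothetical next step with standard systemd tooling, run inside the node:

    systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --since '10 min ago' --no-pager | tail -n 50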
	I1213 08:56:35.480322   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:35.490281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:35.490342   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:35.514866   53550 cri.go:89] found id: ""
	I1213 08:56:35.514880   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.514891   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:35.514896   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:35.514956   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:35.547423   53550 cri.go:89] found id: ""
	I1213 08:56:35.547436   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.547443   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:35.547449   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:35.547529   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:35.576485   53550 cri.go:89] found id: ""
	I1213 08:56:35.576499   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.576506   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:35.576511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:35.576569   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:35.602583   53550 cri.go:89] found id: ""
	I1213 08:56:35.602597   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.602604   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:35.602610   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:35.602671   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:35.628894   53550 cri.go:89] found id: ""
	I1213 08:56:35.628908   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.628915   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:35.628920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:35.628983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:35.657754   53550 cri.go:89] found id: ""
	I1213 08:56:35.657768   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.657775   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:35.657780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:35.657838   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:35.682178   53550 cri.go:89] found id: ""
	I1213 08:56:35.682192   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.682198   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:35.682207   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:35.682218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:35.692814   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:35.692830   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:35.755108   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:35.747202   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.747982   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749544   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749858   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.751344   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:35.755119   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:35.755130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:35.819728   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:35.819749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:35.848015   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:35.848031   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
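If the kubelet journal is clean as well, the remaining suspect is the bootstrap itself: on a kubeadm-based minikube node the apiserver, etcd, scheduler, and controller-manager run as static pods, so their manifests should exist under /etc/kubernetes/manifests. A quick existence check, assuming the standard kubeadm layout (paths may differ on other setups):

    ls -l /etc/kubernetes/manifests/
    # expected: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml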
	I1213 08:56:38.404654   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:38.414683   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:38.414742   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:38.441126   53550 cri.go:89] found id: ""
	I1213 08:56:38.441140   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.441147   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:38.441152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:38.441214   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:38.465511   53550 cri.go:89] found id: ""
	I1213 08:56:38.465524   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.465545   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:38.465550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:38.465606   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:38.489339   53550 cri.go:89] found id: ""
	I1213 08:56:38.489353   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.489359   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:38.489364   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:38.489418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:38.513685   53550 cri.go:89] found id: ""
	I1213 08:56:38.513699   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.513706   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:38.513711   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:38.513768   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:38.542115   53550 cri.go:89] found id: ""
	I1213 08:56:38.542128   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.542135   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:38.542140   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:38.542204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:38.569759   53550 cri.go:89] found id: ""
	I1213 08:56:38.569772   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.569778   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:38.569784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:38.569842   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:38.596740   53550 cri.go:89] found id: ""
	I1213 08:56:38.596754   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.596761   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:38.596769   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:38.596780   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:38.654316   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:38.654335   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:38.665035   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:38.665050   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:38.729308   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:38.729317   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:38.729330   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:38.790889   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:38.790908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:41.323859   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:41.335168   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:41.335228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:41.361007   53550 cri.go:89] found id: ""
	I1213 08:56:41.361021   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.361028   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:41.361033   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:41.361090   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:41.385773   53550 cri.go:89] found id: ""
	I1213 08:56:41.385787   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.385794   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:41.385799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:41.385857   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:41.415146   53550 cri.go:89] found id: ""
	I1213 08:56:41.415160   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.415174   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:41.415179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:41.415235   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:41.441108   53550 cri.go:89] found id: ""
	I1213 08:56:41.441122   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.441129   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:41.441134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:41.441190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:41.475987   53550 cri.go:89] found id: ""
	I1213 08:56:41.476001   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.476008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:41.476014   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:41.476073   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:41.499775   53550 cri.go:89] found id: ""
	I1213 08:56:41.499789   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.499796   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:41.499801   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:41.499861   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:41.528901   53550 cri.go:89] found id: ""
	I1213 08:56:41.528914   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.528931   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:41.528939   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:41.528956   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:41.589661   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:41.589678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:41.602123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:41.602138   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:41.667706   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:41.659176   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.659929   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.661608   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.662073   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.663743   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:41.659176   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.659929   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.661608   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.662073   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.663743   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:41.667715   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:41.667735   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:41.730253   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:41.730270   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:44.257671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:44.269222   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:44.269293   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:44.294398   53550 cri.go:89] found id: ""
	I1213 08:56:44.294412   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.294419   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:44.294423   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:44.294484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:44.319070   53550 cri.go:89] found id: ""
	I1213 08:56:44.319084   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.319092   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:44.319097   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:44.319155   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:44.343392   53550 cri.go:89] found id: ""
	I1213 08:56:44.343405   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.343420   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:44.343425   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:44.343485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:44.367894   53550 cri.go:89] found id: ""
	I1213 08:56:44.367909   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.367924   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:44.367929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:44.367993   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:44.393473   53550 cri.go:89] found id: ""
	I1213 08:56:44.393487   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.393505   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:44.393511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:44.393579   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:44.419150   53550 cri.go:89] found id: ""
	I1213 08:56:44.419164   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.419171   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:44.419177   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:44.419236   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:44.445826   53550 cri.go:89] found id: ""
	I1213 08:56:44.445839   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.445846   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:44.445854   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:44.445864   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:44.473670   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:44.473686   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:44.532419   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:44.532439   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:44.545059   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:44.545075   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:44.621942   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:44.613047   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.613768   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.615594   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.616292   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.618008   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:44.613047   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.613768   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.615594   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.616292   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.618008   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:44.621960   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:44.621970   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.187660   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:47.197939   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:47.197999   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:47.223309   53550 cri.go:89] found id: ""
	I1213 08:56:47.223328   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.223335   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:47.223341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:47.223404   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:47.248945   53550 cri.go:89] found id: ""
	I1213 08:56:47.248958   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.248965   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:47.248971   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:47.249030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:47.277058   53550 cri.go:89] found id: ""
	I1213 08:56:47.277072   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.277079   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:47.277084   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:47.277141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:47.301116   53550 cri.go:89] found id: ""
	I1213 08:56:47.301130   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.301137   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:47.301151   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:47.301209   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:47.323965   53550 cri.go:89] found id: ""
	I1213 08:56:47.323979   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.323987   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:47.323992   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:47.324050   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:47.348999   53550 cri.go:89] found id: ""
	I1213 08:56:47.349019   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.349027   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:47.349032   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:47.349092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:47.373783   53550 cri.go:89] found id: ""
	I1213 08:56:47.373797   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.373803   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:47.373811   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:47.373820   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:47.429021   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:47.429039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:47.439785   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:47.439801   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:47.500829   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:47.500840   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:47.500850   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.568111   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:47.568130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.110119   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:50.120537   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:50.120602   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:50.148966   53550 cri.go:89] found id: ""
	I1213 08:56:50.148980   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.148986   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:50.148991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:50.149046   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:50.177907   53550 cri.go:89] found id: ""
	I1213 08:56:50.177921   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.177928   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:50.177933   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:50.177996   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:50.203131   53550 cri.go:89] found id: ""
	I1213 08:56:50.203144   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.203151   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:50.203155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:50.203262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:50.226237   53550 cri.go:89] found id: ""
	I1213 08:56:50.226257   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.226264   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:50.226269   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:50.226327   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:50.253758   53550 cri.go:89] found id: ""
	I1213 08:56:50.253773   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.253779   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:50.253784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:50.253843   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:50.278302   53550 cri.go:89] found id: ""
	I1213 08:56:50.278315   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.278322   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:50.278327   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:50.278392   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:50.309556   53550 cri.go:89] found id: ""
	I1213 08:56:50.309569   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.309576   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:50.309584   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:50.309594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:50.320066   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:50.320081   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:50.382949   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:50.382958   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:50.382969   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:50.444351   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:50.444370   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.470781   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:50.470797   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.028628   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:53.039130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:53.039200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:53.063996   53550 cri.go:89] found id: ""
	I1213 08:56:53.064009   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.064015   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:53.064020   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:53.064076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:53.088275   53550 cri.go:89] found id: ""
	I1213 08:56:53.088289   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.088296   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:53.088300   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:53.088358   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:53.111773   53550 cri.go:89] found id: ""
	I1213 08:56:53.111786   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.111793   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:53.111808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:53.111887   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:53.137026   53550 cri.go:89] found id: ""
	I1213 08:56:53.137040   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.137046   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:53.137051   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:53.137107   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:53.160335   53550 cri.go:89] found id: ""
	I1213 08:56:53.160349   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.160356   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:53.160361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:53.160416   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:53.184713   53550 cri.go:89] found id: ""
	I1213 08:56:53.184726   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.184733   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:53.184738   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:53.184795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:53.208847   53550 cri.go:89] found id: ""
	I1213 08:56:53.208861   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.208868   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:53.208875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:53.208886   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.266985   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:53.267004   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:53.277388   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:53.277404   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:53.340191   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:53.340200   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:53.340211   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:53.401706   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:53.401724   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:55.928555   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:55.939550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:55.939616   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:55.965405   53550 cri.go:89] found id: ""
	I1213 08:56:55.965419   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.965426   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:55.965431   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:55.965498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:55.992150   53550 cri.go:89] found id: ""
	I1213 08:56:55.992164   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.992171   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:55.992175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:55.992230   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:56.016602   53550 cri.go:89] found id: ""
	I1213 08:56:56.016616   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.016623   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:56.016628   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:56.016689   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:56.042580   53550 cri.go:89] found id: ""
	I1213 08:56:56.042593   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.042600   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:56.042605   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:56.042662   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:56.068761   53550 cri.go:89] found id: ""
	I1213 08:56:56.068775   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.068782   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:56.068787   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:56.068848   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:56.093033   53550 cri.go:89] found id: ""
	I1213 08:56:56.093048   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.093055   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:56.093061   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:56.093126   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:56.117228   53550 cri.go:89] found id: ""
	I1213 08:56:56.117241   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.117248   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:56.117255   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:56.117266   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:56.176992   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:56.177011   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:56.188270   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:56.188285   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:56.253019   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:56.253029   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:56.253039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:56.317674   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:56.317696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:58.848619   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:58.859053   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:58.859112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:58.885409   53550 cri.go:89] found id: ""
	I1213 08:56:58.885423   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.885430   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:58.885436   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:58.885494   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:58.910222   53550 cri.go:89] found id: ""
	I1213 08:56:58.910236   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.910243   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:58.910249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:58.910325   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:58.934888   53550 cri.go:89] found id: ""
	I1213 08:56:58.934902   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.934909   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:58.934914   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:58.934973   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:58.959400   53550 cri.go:89] found id: ""
	I1213 08:56:58.959413   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.959420   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:58.959426   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:58.959487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:58.983607   53550 cri.go:89] found id: ""
	I1213 08:56:58.983621   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.983627   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:58.983651   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:58.983710   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:59.013864   53550 cri.go:89] found id: ""
	I1213 08:56:59.013879   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.013886   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:59.013892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:59.013953   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:59.039411   53550 cri.go:89] found id: ""
	I1213 08:56:59.039425   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.039432   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:59.039475   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:59.039485   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:59.096733   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:59.096753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:59.107622   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:59.107636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:59.174925   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:59.174934   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:59.174947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:59.241043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:59.241063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:01.772758   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:01.783635   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:01.783701   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:01.809991   53550 cri.go:89] found id: ""
	I1213 08:57:01.810006   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.810012   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:01.810017   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:01.810077   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:01.839186   53550 cri.go:89] found id: ""
	I1213 08:57:01.839200   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.839207   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:01.839212   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:01.839280   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:01.863706   53550 cri.go:89] found id: ""
	I1213 08:57:01.863720   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.863727   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:01.863733   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:01.863802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:01.888840   53550 cri.go:89] found id: ""
	I1213 08:57:01.888853   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.888866   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:01.888871   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:01.888931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:01.915920   53550 cri.go:89] found id: ""
	I1213 08:57:01.915933   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.915940   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:01.915944   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:01.916002   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:01.945751   53550 cri.go:89] found id: ""
	I1213 08:57:01.945765   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.945771   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:01.945776   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:01.945845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:01.970743   53550 cri.go:89] found id: ""
	I1213 08:57:01.970757   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.970765   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:01.970773   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:01.970782   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:02.026866   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:02.026889   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:02.038522   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:02.038539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:02.102348   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:02.102361   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:02.102375   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:02.169043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:02.169063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:04.696543   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:04.706341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:04.706437   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:04.731229   53550 cri.go:89] found id: ""
	I1213 08:57:04.731243   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.731250   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:04.731255   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:04.731313   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:04.755649   53550 cri.go:89] found id: ""
	I1213 08:57:04.755664   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.755671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:04.755675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:04.755731   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:04.792911   53550 cri.go:89] found id: ""
	I1213 08:57:04.792925   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.792932   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:04.792937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:04.793004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:04.819883   53550 cri.go:89] found id: ""
	I1213 08:57:04.819898   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.819905   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:04.819910   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:04.819977   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:04.849837   53550 cri.go:89] found id: ""
	I1213 08:57:04.849851   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.849858   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:04.849863   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:04.849918   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:04.874858   53550 cri.go:89] found id: ""
	I1213 08:57:04.874882   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.874890   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:04.874895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:04.874960   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:04.903606   53550 cri.go:89] found id: ""
	I1213 08:57:04.903627   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.903634   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:04.903643   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:04.903654   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:04.974645   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:04.974655   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:04.974665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:05.042463   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:05.042483   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:05.073448   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:05.073463   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:05.138728   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:05.138751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
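	For orientation, the cycle visible in these entries (pgrep for an apiserver process, then one crictl listing per control-plane component, then log gathering) can be approximated by the loop below. This is a minimal illustrative sketch, not minikube's actual implementation; the component list and the roughly three-second interval are simply read off the timestamps above.

	// sketch.go: hypothetical reconstruction of the retry loop seen in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// The seven component names probed in each cycle above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}

	func main() {
		for {
			// pgrep exits non-zero when no matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			for _, name := range components {
				// crictl prints matching container IDs; empty output means none.
				out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
				if len(out) == 0 {
					fmt.Printf("no container found matching %q\n", name)
				}
			}
			time.Sleep(3 * time.Second) // the log shows ~3s between cycles
		}
	}

	In this run the loop never exits: every crictl listing returns empty, so the driver keeps cycling until the outer test timeout fires.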
	I1213 08:57:07.650339   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:07.660396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:07.660456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:07.683873   53550 cri.go:89] found id: ""
	I1213 08:57:07.683886   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.683893   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:07.683898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:07.683955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:07.708331   53550 cri.go:89] found id: ""
	I1213 08:57:07.708345   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.708352   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:07.708357   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:07.708413   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:07.732899   53550 cri.go:89] found id: ""
	I1213 08:57:07.732913   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.732920   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:07.732925   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:07.732984   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:07.757287   53550 cri.go:89] found id: ""
	I1213 08:57:07.757301   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.757308   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:07.757313   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:07.757384   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:07.795374   53550 cri.go:89] found id: ""
	I1213 08:57:07.795387   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.795394   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:07.795399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:07.795464   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:07.825153   53550 cri.go:89] found id: ""
	I1213 08:57:07.825167   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.825173   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:07.825182   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:07.825237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:07.852307   53550 cri.go:89] found id: ""
	I1213 08:57:07.852321   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.852327   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:07.852336   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:07.852345   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:07.880059   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:07.880077   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:07.939241   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:07.939258   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.949880   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:07.949895   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:08.020565   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:08.020576   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:08.020587   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.587648   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:10.597489   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:10.597549   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:10.628550   53550 cri.go:89] found id: ""
	I1213 08:57:10.628564   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.628571   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:10.628579   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:10.628636   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:10.652715   53550 cri.go:89] found id: ""
	I1213 08:57:10.652728   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.652735   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:10.652740   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:10.652800   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:10.676571   53550 cri.go:89] found id: ""
	I1213 08:57:10.676585   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.676591   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:10.676596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:10.676656   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:10.701425   53550 cri.go:89] found id: ""
	I1213 08:57:10.701439   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.701446   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:10.701451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:10.701512   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:10.725031   53550 cri.go:89] found id: ""
	I1213 08:57:10.725044   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.725051   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:10.725056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:10.725115   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:10.748783   53550 cri.go:89] found id: ""
	I1213 08:57:10.748796   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.748803   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:10.748808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:10.748865   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:10.782351   53550 cri.go:89] found id: ""
	I1213 08:57:10.782364   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.782371   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:10.782379   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:10.782389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:10.795735   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:10.795751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:10.871365   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:10.871375   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:10.871386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.934169   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:10.934186   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:10.960579   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:10.960595   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.522265   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:13.532592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:13.532651   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:13.557594   53550 cri.go:89] found id: ""
	I1213 08:57:13.557607   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.557614   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:13.557622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:13.557678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:13.582015   53550 cri.go:89] found id: ""
	I1213 08:57:13.582029   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.582036   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:13.582041   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:13.582101   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:13.606414   53550 cri.go:89] found id: ""
	I1213 08:57:13.606430   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.606437   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:13.606442   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:13.606501   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:13.633258   53550 cri.go:89] found id: ""
	I1213 08:57:13.633271   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.633278   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:13.633283   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:13.633347   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:13.657138   53550 cri.go:89] found id: ""
	I1213 08:57:13.657151   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.657158   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:13.657163   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:13.657220   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:13.680740   53550 cri.go:89] found id: ""
	I1213 08:57:13.680754   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.680760   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:13.680766   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:13.680821   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:13.704953   53550 cri.go:89] found id: ""
	I1213 08:57:13.704966   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.704973   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:13.704981   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:13.704992   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:13.770673   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:13.770683   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:13.770696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:13.840896   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:13.840915   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:13.870203   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:13.870219   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.927703   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:13.927721   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.440308   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:16.450569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:16.450632   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:16.477483   53550 cri.go:89] found id: ""
	I1213 08:57:16.477497   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.477503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:16.477508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:16.477565   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:16.502333   53550 cri.go:89] found id: ""
	I1213 08:57:16.502347   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.502354   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:16.502369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:16.502428   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:16.532266   53550 cri.go:89] found id: ""
	I1213 08:57:16.532282   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.532288   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:16.532293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:16.532350   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:16.560396   53550 cri.go:89] found id: ""
	I1213 08:57:16.560410   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.560417   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:16.560422   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:16.560478   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:16.588855   53550 cri.go:89] found id: ""
	I1213 08:57:16.588868   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.588875   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:16.588881   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:16.588940   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:16.613011   53550 cri.go:89] found id: ""
	I1213 08:57:16.613024   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.613031   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:16.613036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:16.613093   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:16.637627   53550 cri.go:89] found id: ""
	I1213 08:57:16.637641   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.637648   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:16.637655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:16.637665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:16.694489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:16.694506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.705456   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:16.705471   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:16.774554   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:16.774565   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:16.774577   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:16.840799   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:16.840818   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:19.370819   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:19.380996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:19.381057   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:19.405680   53550 cri.go:89] found id: ""
	I1213 08:57:19.405694   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.405701   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:19.405707   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:19.405765   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:19.434562   53550 cri.go:89] found id: ""
	I1213 08:57:19.434575   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.434583   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:19.434588   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:19.434645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:19.460752   53550 cri.go:89] found id: ""
	I1213 08:57:19.460765   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.460772   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:19.460777   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:19.460833   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:19.486494   53550 cri.go:89] found id: ""
	I1213 08:57:19.486508   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.486515   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:19.486520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:19.486580   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:19.515809   53550 cri.go:89] found id: ""
	I1213 08:57:19.515824   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.515830   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:19.515835   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:19.515892   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:19.541206   53550 cri.go:89] found id: ""
	I1213 08:57:19.541219   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.541226   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:19.541231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:19.541298   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:19.565992   53550 cri.go:89] found id: ""
	I1213 08:57:19.566005   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.566012   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:19.566020   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:19.566030   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:19.593821   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:19.593836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:19.650142   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:19.650161   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:19.660963   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:19.660978   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:19.726595   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:19.726604   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:19.726615   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
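	The stderr captured in every cycle shows the same dial error, connect: connection refused on [::1]:8441. A refused connection (as opposed to a timeout) means the host is reachable but no process owns the port, i.e. the apiserver was never started rather than being blocked by a firewall. A hypothetical standalone probe that reproduces exactly this error condition:

	// probe.go: assumed illustration only; port 8441 is the --apiserver-port used by this test.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// With no listener this fails immediately, e.g. "connect: connection refused".
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}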
	I1213 08:57:22.290630   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:22.300536   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:22.300595   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:22.323650   53550 cri.go:89] found id: ""
	I1213 08:57:22.323663   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.323670   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:22.323675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:22.323738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:22.346879   53550 cri.go:89] found id: ""
	I1213 08:57:22.346892   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.346899   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:22.346904   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:22.346958   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:22.370613   53550 cri.go:89] found id: ""
	I1213 08:57:22.370627   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.370633   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:22.370638   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:22.370695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:22.397037   53550 cri.go:89] found id: ""
	I1213 08:57:22.397051   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.397057   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:22.397062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:22.397120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:22.420786   53550 cri.go:89] found id: ""
	I1213 08:57:22.420799   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.420806   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:22.420811   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:22.420873   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:22.445029   53550 cri.go:89] found id: ""
	I1213 08:57:22.445043   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.445050   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:22.445056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:22.445112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:22.468674   53550 cri.go:89] found id: ""
	I1213 08:57:22.468688   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.468694   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:22.468702   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:22.468712   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:22.495304   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:22.495322   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:22.552462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:22.552479   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:22.562826   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:22.562841   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:22.622604   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:22.622614   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:22.622625   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:25.187376   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:25.197281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:25.197340   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:25.224829   53550 cri.go:89] found id: ""
	I1213 08:57:25.224843   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.224850   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:25.224855   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:25.224914   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:25.253288   53550 cri.go:89] found id: ""
	I1213 08:57:25.253303   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.253310   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:25.253315   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:25.253371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:25.277253   53550 cri.go:89] found id: ""
	I1213 08:57:25.277267   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.277274   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:25.277279   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:25.277338   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:25.303815   53550 cri.go:89] found id: ""
	I1213 08:57:25.303828   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.303835   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:25.303840   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:25.303901   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:25.328041   53550 cri.go:89] found id: ""
	I1213 08:57:25.328054   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.328060   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:25.328065   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:25.328123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:25.356334   53550 cri.go:89] found id: ""
	I1213 08:57:25.356348   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.356355   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:25.356369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:25.356424   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:25.380096   53550 cri.go:89] found id: ""
	I1213 08:57:25.380110   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.380116   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:25.380124   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:25.380134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:25.439426   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:25.439444   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:25.449905   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:25.449921   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:25.512900   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:25.512910   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:25.512920   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:25.575756   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:25.575775   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
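Every pass of this loop probes the CRI for each control-plane component and comes back empty. The probes can be reproduced by hand against the node; a minimal sketch in plain shell, assuming the functional-074420 profile from this run is still up and reachable via "minikube ssh" (the component list mirrors the probes in the log above):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # Same probe the harness issues: list containers in any state whose name
	  # matches the component; empty output means the container never started.
	  minikube -p functional-074420 ssh -- sudo crictl ps -a --quiet --name="$c"
	done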
	[The same log-gathering cycle repeats roughly every three seconds from 08:57:28 through 08:57:46 (kubectl PIDs 15141 through 15773), differing only in timestamps, PIDs, and the order in which the individual logs are collected; every pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and "kubectl describe nodes" fails with the same connection-refused error on localhost:8441.]
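Each cycle opens with a process-level check before the container probes. The same check, run by hand, confirms whether any apiserver process exists at all; a sketch under the same assumption about the functional-074420 profile, using only the pgrep invocation already shown in the log:

	# pgrep exits non-zero and prints nothing when no kube-apiserver process
	# matches, consistent with the empty CRI probes throughout this log.
	minikube -p functional-074420 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process found'"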
	I1213 08:57:48.670805   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:48.680874   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:48.680935   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:48.704942   53550 cri.go:89] found id: ""
	I1213 08:57:48.704955   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.704962   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:48.704968   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:48.705029   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:48.729965   53550 cri.go:89] found id: ""
	I1213 08:57:48.729979   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.729986   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:48.729991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:48.730048   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:48.754712   53550 cri.go:89] found id: ""
	I1213 08:57:48.754726   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.754733   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:48.754739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:48.754798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:48.786991   53550 cri.go:89] found id: ""
	I1213 08:57:48.787014   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.787021   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:48.787026   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:48.787082   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:48.812918   53550 cri.go:89] found id: ""
	I1213 08:57:48.812932   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.812939   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:48.812943   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:48.813010   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:48.841512   53550 cri.go:89] found id: ""
	I1213 08:57:48.841525   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.841533   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:48.841538   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:48.841597   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:48.866500   53550 cri.go:89] found id: ""
	I1213 08:57:48.866514   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.866521   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:48.866529   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:48.866539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:48.922975   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:48.922993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:48.933525   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:48.933540   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:48.995831   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:48.995841   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:48.995852   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:49.061866   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:49.061885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
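Every describe-nodes attempt in these cycles fails the same way, with dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port inside the node; that is consistent with crictl finding no kube-apiserver container at all. If debugging this by hand, two quick checks from inside the node would confirm it (illustrative commands, not part of the test run):

    # Is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on :8441"

    # Does the endpoint answer? A live apiserver returns "ok" from /healthz;
    # -k skips certificate verification for the self-signed cert.
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"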
	I1213 08:57:51.594845   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:51.606962   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:51.607021   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:51.630371   53550 cri.go:89] found id: ""
	I1213 08:57:51.630390   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.630397   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:51.630402   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:51.630456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:51.655753   53550 cri.go:89] found id: ""
	I1213 08:57:51.655768   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.655775   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:51.655780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:51.655835   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:51.680116   53550 cri.go:89] found id: ""
	I1213 08:57:51.680130   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.680136   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:51.680142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:51.680199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:51.703715   53550 cri.go:89] found id: ""
	I1213 08:57:51.703728   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.703734   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:51.703739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:51.703798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:51.728242   53550 cri.go:89] found id: ""
	I1213 08:57:51.728257   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.728263   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:51.728268   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:51.728334   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:51.752764   53550 cri.go:89] found id: ""
	I1213 08:57:51.752777   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.752783   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:51.752788   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:51.752845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:51.776542   53550 cri.go:89] found id: ""
	I1213 08:57:51.776556   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.776562   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:51.776570   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:51.776583   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.809113   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:51.809129   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:51.868930   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:51.868948   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:51.879570   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:51.879594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:51.948757   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:51.948767   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:51.948777   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.516634   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:54.526661   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:54.526738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:54.553106   53550 cri.go:89] found id: ""
	I1213 08:57:54.553120   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.553126   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:54.553132   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:54.553190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:54.581404   53550 cri.go:89] found id: ""
	I1213 08:57:54.581417   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.581426   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:54.581430   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:54.581484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:54.605783   53550 cri.go:89] found id: ""
	I1213 08:57:54.605796   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.605803   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:54.605807   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:54.605862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:54.634146   53550 cri.go:89] found id: ""
	I1213 08:57:54.634160   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.634167   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:54.634171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:54.634227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:54.658720   53550 cri.go:89] found id: ""
	I1213 08:57:54.658734   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.658741   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:54.658746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:54.658803   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:54.683926   53550 cri.go:89] found id: ""
	I1213 08:57:54.683940   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.683947   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:54.683952   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:54.684011   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:54.712272   53550 cri.go:89] found id: ""
	I1213 08:57:54.712286   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.712293   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:54.712300   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:54.712312   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:54.769590   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:54.769607   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:54.781369   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:54.781386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:54.846793   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:54.846803   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:54.846813   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.913758   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:54.913778   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
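The "container status" step in each cycle uses a small shell fallback chain: which crictl || echo crictl substitutes the full crictl path when one is found (or the bare name, letting sudo resolve it via PATH), and || sudo docker ps -a falls back to Docker if the crictl invocation fails outright. Spelled out, the one-liner behaves like this (same commands as in the log):

    # Prefer crictl; fall back to the bare name, then to docker.
    crictl_bin=$(which crictl || echo crictl)
    sudo "$crictl_bin" ps -a || sudo docker ps -a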
	I1213 08:57:57.444332   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:57.453993   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:57.454058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:57.478195   53550 cri.go:89] found id: ""
	I1213 08:57:57.478209   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.478225   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:57.478231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:57.478301   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:57.502242   53550 cri.go:89] found id: ""
	I1213 08:57:57.502269   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.502277   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:57.502282   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:57.502346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:57.525845   53550 cri.go:89] found id: ""
	I1213 08:57:57.525859   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.525867   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:57.525872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:57.525931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:57.549123   53550 cri.go:89] found id: ""
	I1213 08:57:57.549137   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.549143   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:57.549148   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:57.549203   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:57.576988   53550 cri.go:89] found id: ""
	I1213 08:57:57.577002   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.577009   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:57.577019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:57.577076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:57.599837   53550 cri.go:89] found id: ""
	I1213 08:57:57.599851   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.599858   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:57.599864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:57.599932   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:57.623671   53550 cri.go:89] found id: ""
	I1213 08:57:57.623685   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.623693   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:57.623700   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:57.623711   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:57.634031   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:57.634046   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:57.695658   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:57.695668   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:57.695678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:57.762393   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:57.762412   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.790711   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:57.790726   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.355817   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:00.372076   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:00.372142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:00.409377   53550 cri.go:89] found id: ""
	I1213 08:58:00.409392   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.409398   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:00.409404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:00.409467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:00.436239   53550 cri.go:89] found id: ""
	I1213 08:58:00.436254   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.436261   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:00.436266   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:00.436326   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:00.461909   53550 cri.go:89] found id: ""
	I1213 08:58:00.461922   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.461929   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:00.461934   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:00.461991   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:00.491257   53550 cri.go:89] found id: ""
	I1213 08:58:00.491270   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.491276   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:00.491281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:00.491339   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:00.517632   53550 cri.go:89] found id: ""
	I1213 08:58:00.517646   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.517658   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:00.517664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:00.517726   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:00.543370   53550 cri.go:89] found id: ""
	I1213 08:58:00.543384   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.543391   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:00.543396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:00.543460   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:00.568967   53550 cri.go:89] found id: ""
	I1213 08:58:00.568980   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.568987   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:00.568995   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:00.569005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:00.636984   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:00.636994   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:00.637006   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:00.699893   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:00.699911   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:00.730182   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:00.730198   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.787828   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:00.787847   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
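The dmesg step narrows the kernel ring buffer to warnings and worse: -P disables the pager, -H selects human-readable output, -L=never turns colour off, and --level restricts severity; the trailing tail keeps only the last 400 lines. Standalone, the same invocation reads:

    # Kernel messages at warning severity or above, no pager or colour,
    # limited to the most recent 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400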
	I1213 08:58:03.298762   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:03.310337   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:03.310399   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:03.345482   53550 cri.go:89] found id: ""
	I1213 08:58:03.345496   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.345503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:03.345508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:03.345568   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:03.370651   53550 cri.go:89] found id: ""
	I1213 08:58:03.370664   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.370671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:03.370676   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:03.370730   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:03.393554   53550 cri.go:89] found id: ""
	I1213 08:58:03.393568   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.393574   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:03.393580   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:03.393638   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:03.418084   53550 cri.go:89] found id: ""
	I1213 08:58:03.418098   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.418105   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:03.418110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:03.418180   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:03.442426   53550 cri.go:89] found id: ""
	I1213 08:58:03.442440   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.442447   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:03.442451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:03.442510   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:03.467378   53550 cri.go:89] found id: ""
	I1213 08:58:03.467391   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.467398   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:03.467404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:03.467539   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:03.493640   53550 cri.go:89] found id: ""
	I1213 08:58:03.493653   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.493660   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:03.493668   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:03.493678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:03.559295   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:03.559305   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:03.559315   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:03.622616   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:03.622633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:03.656517   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:03.656534   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:03.715111   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:03.715131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.226614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:06.237139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:06.237200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:06.261636   53550 cri.go:89] found id: ""
	I1213 08:58:06.261652   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.261659   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:06.261664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:06.261727   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:06.293692   53550 cri.go:89] found id: ""
	I1213 08:58:06.293707   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.293714   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:06.293719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:06.293778   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:06.321565   53550 cri.go:89] found id: ""
	I1213 08:58:06.321578   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.321584   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:06.321589   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:06.321643   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:06.348809   53550 cri.go:89] found id: ""
	I1213 08:58:06.348856   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.348862   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:06.348869   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:06.348925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:06.378146   53550 cri.go:89] found id: ""
	I1213 08:58:06.378159   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.378166   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:06.378171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:06.378227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:06.402993   53550 cri.go:89] found id: ""
	I1213 08:58:06.403006   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.403013   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:06.403019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:06.403074   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:06.429062   53550 cri.go:89] found id: ""
	I1213 08:58:06.429076   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.429084   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:06.429092   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:06.429102   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:06.485200   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:06.485218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.496017   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:06.496033   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:06.561266   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:06.561275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:06.561299   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:06.624429   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:06.624451   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.152326   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:09.162496   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:09.162552   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:09.187570   53550 cri.go:89] found id: ""
	I1213 08:58:09.187583   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.187590   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:09.187595   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:09.187653   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:09.211361   53550 cri.go:89] found id: ""
	I1213 08:58:09.211375   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.211382   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:09.211387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:09.211441   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:09.240289   53550 cri.go:89] found id: ""
	I1213 08:58:09.240302   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.240310   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:09.240316   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:09.240381   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:09.263680   53550 cri.go:89] found id: ""
	I1213 08:58:09.263694   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.263701   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:09.263706   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:09.263767   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:09.289437   53550 cri.go:89] found id: ""
	I1213 08:58:09.289451   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.289458   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:09.289463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:09.289524   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:09.323385   53550 cri.go:89] found id: ""
	I1213 08:58:09.323398   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.323405   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:09.323410   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:09.323467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:09.353577   53550 cri.go:89] found id: ""
	I1213 08:58:09.353590   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.353597   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:09.353605   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:09.353616   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.382787   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:09.382803   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:09.449042   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:09.449060   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:09.460226   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:09.460242   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:09.528091   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:09.528102   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:09.528112   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
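The timestamps show this whole probe repeating on a roughly three-second cadence (08:57:46, :48, :51, :54, :57, 08:58:00, :03, :06, :09, :12) with identical empty results each time, the shape of a poll-until-deadline wait. A minimal sketch of such a loop, assuming a hypothetical 300-second deadline (minikube's real wait logic and timeout live in its Go code, not in shell):

    # Poll for the apiserver every 3s until a deadline (deadline value is hypothetical).
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "timed out waiting for kube-apiserver" >&2
            exit 1
        fi
        sleep 3
    done
    echo "kube-apiserver is up"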
	I1213 08:58:12.097937   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:12.108009   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:12.108068   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:12.131531   53550 cri.go:89] found id: ""
	I1213 08:58:12.131546   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.131553   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:12.131558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:12.131621   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:12.161149   53550 cri.go:89] found id: ""
	I1213 08:58:12.161163   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.161170   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:12.161175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:12.161237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:12.187318   53550 cri.go:89] found id: ""
	I1213 08:58:12.187332   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.187339   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:12.187344   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:12.187400   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:12.212736   53550 cri.go:89] found id: ""
	I1213 08:58:12.212749   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.212756   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:12.212761   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:12.212818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:12.236946   53550 cri.go:89] found id: ""
	I1213 08:58:12.236959   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.236967   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:12.236973   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:12.237036   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:12.260663   53550 cri.go:89] found id: ""
	I1213 08:58:12.260677   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.260683   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:12.260690   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:12.260746   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:12.292004   53550 cri.go:89] found id: ""
	I1213 08:58:12.292022   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.292030   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:12.292038   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:12.292055   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:12.338118   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:12.338134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:12.397489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:12.397527   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:12.408810   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:12.408834   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:12.471195   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:12.471207   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:12.471217   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
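What follows is minikube's apiserver wait loop: every few seconds it pgreps for a kube-apiserver process, asks the CRI for each control-plane container by name, and re-gathers logs. Collapsed into a shell loop, the per-component probe it runs looks like this (a sketch; the component list and crictl invocation are copied from the log, and the empty output is what the `found id: ""` lines record):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # prints matching container IDs; nothing printed means none exist
    done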
	[... the probe cycle repeats unchanged at 08:58:15, 08:58:18, 08:58:21, and 08:58:24: pgrep finds no kube-apiserver, every crictl listing (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) returns no IDs, and each `kubectl describe nodes` fails with the same connection-refused errors against localhost:8441 ...]
	I1213 08:58:26.764306   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:26.775859   53550 kubeadm.go:602] duration metric: took 4m4.554296141s to restartPrimaryControlPlane
	W1213 08:58:26.775922   53550 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 08:58:26.776056   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
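Having spent 4m4s failing to restart the existing control plane, minikube falls back to wiping it and re-initialising from scratch. The reset it issues (an equivalent form of the `/bin/bash -c` command on the line above) can be replayed by hand inside the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force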
	I1213 08:58:27.191363   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:58:27.204546   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:58:27.212501   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:58:27.212553   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:58:27.220364   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:58:27.220373   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 08:58:27.220423   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:58:27.228123   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:58:27.228179   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:58:27.235737   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:58:27.243839   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:58:27.243909   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:58:27.251406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.259128   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:58:27.259197   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.266406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:58:27.274290   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:58:27.274347   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
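The stale-config sweep above reduces to a simple pattern: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise delete it. Because the preceding kubeadm reset removed all four files, every grep exits with status 2 and every rm is a no-op. As a loop (a sketch; the individual grep and rm commands are exactly the ones in the log, with -q added to suppress output):

    ep="https://control-plane.minikube.internal:8441"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done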
	I1213 08:58:27.281913   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:58:27.321302   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:58:27.321349   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:58:27.394605   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:58:27.394672   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:58:27.394706   53550 kubeadm.go:319] OS: Linux
	I1213 08:58:27.394750   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:58:27.394798   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:58:27.394844   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:58:27.394891   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:58:27.394938   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:58:27.394984   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:58:27.395028   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:58:27.395075   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:58:27.395120   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:58:27.462440   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:58:27.462546   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:58:27.462635   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:58:27.476078   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:58:27.481288   53550 out.go:252]   - Generating certificates and keys ...
	I1213 08:58:27.481378   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:58:27.481454   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:58:27.481542   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:58:27.481611   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:58:27.481690   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:58:27.481750   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:58:27.481822   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:58:27.481892   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:58:27.481974   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:58:27.482055   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:58:27.482101   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:58:27.482165   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:58:27.905850   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:58:28.178703   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:58:28.541521   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:58:28.686915   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:58:29.281245   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:58:29.281953   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:58:29.285342   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:58:29.288544   53550 out.go:252]   - Booting up control plane ...
	I1213 08:58:29.288640   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:58:29.288718   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:58:29.289378   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:58:29.310312   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:58:29.310629   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:58:29.318324   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:58:29.318581   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:58:29.318622   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:58:29.457400   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:58:29.457506   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:02:29.458561   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216357s
	I1213 09:02:29.458592   53550 kubeadm.go:319] 
	I1213 09:02:29.458674   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:02:29.458746   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:02:29.458876   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:02:29.458882   53550 kubeadm.go:319] 
	I1213 09:02:29.458995   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:02:29.459029   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:02:29.459061   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:02:29.459065   53550 kubeadm.go:319] 
	I1213 09:02:29.463013   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:02:29.463412   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:02:29.463534   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:02:29.463755   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:02:29.463760   53550 kubeadm.go:319] 
	I1213 09:02:29.463824   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 09:02:29.463944   53550 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
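This is the actual failure: kubeadm writes the static-pod manifests, starts the kubelet, and then polls the kubelet's local health endpoint for up to four minutes; the endpoint never comes up, so the wait-control-plane phase times out. The checks worth running at that point are the ones the output itself names, plus the probe kubeadm polls:

    systemctl status kubelet                      # is the unit even running?
    journalctl -xeu kubelet                       # why it is failing, if it is crash-looping
    curl -sSL http://127.0.0.1:10248/healthz      # the endpoint kubeadm waits up to 4m0s on

The SystemVerification warnings above are also suggestive: the node is still on cgroups v1, which kubelet v1.35 or newer rejects by default unless the KubeletConfiguration option FailCgroupV1 is set to false, as the warning text spells out. On this cgroups-v1 host that alone could keep the kubelet from ever reporting healthy.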
	
	I1213 09:02:29.464028   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	[... the retry repeats the same sequence verbatim: the kubelet check, the stale-config cleanup (all four /etc/kubernetes/*.conf probes exit with status 2 and the files are removed), and a second `kubeadm init` at 09:02:29 that walks through the identical preflight, certs, kubeconfig, etcd, and control-plane phases before waiting on the kubelet health check again ...]
	I1213 09:06:31.473943   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000237859s
	I1213 09:06:31.473980   53550 kubeadm.go:319] 
	I1213 09:06:31.474081   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:06:31.474292   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:06:31.474479   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:06:31.474488   53550 kubeadm.go:319] 
	I1213 09:06:31.474674   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:06:31.474967   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:06:31.475021   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:06:31.475025   53550 kubeadm.go:319] 
	I1213 09:06:31.479982   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:06:31.480734   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:06:31.480923   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:06:31.481347   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:06:31.481355   53550 kubeadm.go:319] 
	I1213 09:06:31.481475   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:06:31.481540   53550 kubeadm.go:403] duration metric: took 12m9.29303151s to StartCluster
	I1213 09:06:31.481569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:06:31.481637   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:06:31.505490   53550 cri.go:89] found id: ""
	I1213 09:06:31.505505   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.505511   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:06:31.505516   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:06:31.505576   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:06:31.533408   53550 cri.go:89] found id: ""
	I1213 09:06:31.533422   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.533429   53550 logs.go:284] No container was found matching "etcd"
	I1213 09:06:31.533433   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:06:31.533495   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:06:31.563195   53550 cri.go:89] found id: ""
	I1213 09:06:31.563218   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.563225   53550 logs.go:284] No container was found matching "coredns"
	I1213 09:06:31.563230   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:06:31.563288   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:06:31.588179   53550 cri.go:89] found id: ""
	I1213 09:06:31.588192   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.588199   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:06:31.588204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:06:31.588262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:06:31.613124   53550 cri.go:89] found id: ""
	I1213 09:06:31.613137   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.613144   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:06:31.613149   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:06:31.613204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:06:31.637268   53550 cri.go:89] found id: ""
	I1213 09:06:31.637282   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.637297   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:06:31.637303   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:06:31.637360   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:06:31.661188   53550 cri.go:89] found id: ""
	I1213 09:06:31.661208   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.661214   53550 logs.go:284] No container was found matching "kindnet"
	I1213 09:06:31.661223   53550 logs.go:123] Gathering logs for container status ...
	I1213 09:06:31.661232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:06:31.690241   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 09:06:31.690257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:06:31.745899   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 09:06:31.745917   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:06:31.756123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:06:31.756137   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:06:31.847485   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 09:06:31.847496   53550 logs.go:123] Gathering logs for containerd ...
	I1213 09:06:31.847506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 09:06:31.908510   53550 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:06:31.908551   53550 out.go:285] * 
	W1213 09:06:31.908654   53550 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1213 09:06:31.908704   53550 out.go:285] * 
	W1213 09:06:31.910815   53550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:06:31.916295   53550 out.go:203] 
	W1213 09:06:31.920097   53550 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1213 09:06:31.920144   53550 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:06:31.920163   53550 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:06:31.923856   53550 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:35.403818   21191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:35.404313   21191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:35.406014   21191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:35.406313   21191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:35.407888   21191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:06:35 up 49 min,  0 user,  load average: 0.05, 0.18, 0.33
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:06:31 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 09:06:32 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:32 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:32 functional-074420 kubelet[20967]: E1213 09:06:32.593960   20967 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:32 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:33 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 09:06:33 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:33 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:33 functional-074420 kubelet[21063]: E1213 09:06:33.342441   21063 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:33 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:33 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 09:06:34 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:34 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:34 functional-074420 kubelet[21080]: E1213 09:06:34.088418   21080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 09:06:34 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:34 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:34 functional-074420 kubelet[21105]: E1213 09:06:34.840430   21105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:06:34 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
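The dump above pins down the root cause: kubeadm's wait-control-plane phase times out because the kubelet never becomes healthy, and the kubelet journal shows why — kubelet v1.35 refuses to start on a host still running cgroups v1 unless the KubeletConfiguration field 'FailCgroupV1' is set to 'false' (per the [WARNING SystemVerification] text above). A minimal triage sketch, assuming the same profile and binary as the log; the KubeletConfiguration snippet is an illustration built from that warning text, not a verified fix:

	# Inspection commands named by kubeadm's own error message, run inside the node:
	minikube -p functional-074420 ssh -- systemctl status kubelet
	minikube -p functional-074420 ssh -- journalctl -xeu kubelet

	# Confirm which cgroup hierarchy the host mounts (util-linux stat):
	stat -fc %T /sys/fs/cgroup   # "cgroup2fs" on cgroups v2, "tmpfs" on cgroups v1

	# Illustrative kubelet config carrying the field the preflight warning names:
	cat <<'EOF' > failcgroupv1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

minikube's own suggestion at the end of the log is a retry with --extra-config=kubelet.cgroup-driver=systemd; whether that also clears this particular cgroup v1 validation failure is not confirmed by anything in the output above.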
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (394.95531ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.27s)
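ComponentHealth fails in about two seconds because the single-field probe above already reports the apiserver as Stopped, so the test skips its kubectl checks. A fuller readout of the same state, assuming the same binary and profile as the log:

	# Prints Host, Kubelet, APIServer and Kubeconfig state for the profile:
	out/minikube-linux-arm64 status -p functional-074420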

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-074420 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-074420 apply -f testdata/invalidsvc.yaml: exit status 1 (53.319462ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-074420 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)
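The stderr makes the failure mode clear: kubectl cannot even download the OpenAPI schema for client-side validation because nothing is listening on 192.168.49.2:8441. The flag kubectl suggests only skips that validation step; apply itself still needs a reachable apiserver, so the sketch below (using only the flag named in the stderr) would still be refused in this state:

	# --validate=false skips schema validation per the stderr hint; apply still
	# has to contact the (stopped) apiserver and would be refused here:
	kubectl --context functional-074420 apply --validate=false -f testdata/invalidsvc.yaml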

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074420 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074420 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074420 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-074420 --alsologtostderr -v=1] stderr:
I1213 09:08:35.497013   70894 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:35.497155   70894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:35.497173   70894 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:35.497188   70894 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:35.497561   70894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:35.497889   70894 mustload.go:66] Loading cluster: functional-074420
I1213 09:08:35.499093   70894 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:35.499867   70894 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:35.517155   70894 host.go:66] Checking if "functional-074420" exists ...
I1213 09:08:35.517479   70894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 09:08:35.569838   70894 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.560723916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 09:08:35.569954   70894 api_server.go:166] Checking apiserver status ...
I1213 09:08:35.570012   70894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 09:08:35.570054   70894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:35.587123   70894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
W1213 09:08:35.692910   70894 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 09:08:35.695950   70894 out.go:179] * The control-plane node functional-074420 apiserver is not running: (state=Stopped)
I1213 09:08:35.699017   70894 out.go:179]   To start a cluster, run: "minikube start -p functional-074420"
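No URL is produced because the dashboard command's apiserver probe (the pgrep at 09:08:35.570 above) exits with status 1. The probe can be reproduced by hand, assuming minikube's ssh command pass-through:

	# Exit status 1, as in the log, means no kube-apiserver process in the node:
	minikube -p functional-074420 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'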
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
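For one-off triage, the full inspect dump above is more than needed; docker's --format flag can pull a single field. A minimal sketch using only documented Go-template syntax (the port key and profile name are taken from the output above):

	# host port published for the apiserver's 8441/tcp (32791 in the NetworkSettings block)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-074420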
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (315.918779ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-074420 service hello-node --url                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1               │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh cat /mount-9p/test-1765616905262471887                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount1 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount3 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount2 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh findmnt -T /mount2                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh findmnt -T /mount3                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount     │ -p functional-074420 --kill=true                                                                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-074420 --alsologtostderr -v=1                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:08:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:08:35.252578   70821 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:08:35.252689   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252702   70821 out.go:374] Setting ErrFile to fd 2...
	I1213 09:08:35.252708   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252952   70821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:08:35.253296   70821 out.go:368] Setting JSON to false
	I1213 09:08:35.254045   70821 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3068,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:08:35.254113   70821 start.go:143] virtualization:  
	I1213 09:08:35.257261   70821 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:08:35.260972   70821 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:08:35.261121   70821 notify.go:221] Checking for updates...
	I1213 09:08:35.266518   70821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:08:35.269328   70821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:08:35.272289   70821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:08:35.275299   70821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:08:35.278118   70821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:08:35.281471   70821 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:08:35.282073   70821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:08:35.321729   70821 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:08:35.321869   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.382331   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.373170311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.382437   70821 docker.go:319] overlay module found
	I1213 09:08:35.387422   70821 out.go:179] * Using the docker driver based on existing profile
	I1213 09:08:35.390338   70821 start.go:309] selected driver: docker
	I1213 09:08:35.390361   70821 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.390461   70821 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:08:35.390572   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.442520   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.433498577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.442956   70821 cni.go:84] Creating CNI manager for ""
	I1213 09:08:35.443020   70821 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:08:35.443065   70821 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.446147   70821 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
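	
	The "failed to load cni during init" error above is containerd's standard complaint while /etc/cni/net.d is empty; the CNI config normally lands only after kubelet is healthy (kindnet, per the Last Start log), so during this crashloop the directory presumably stays empty. One way to confirm, assuming the node is still reachable over ssh:
	
	out/minikube-linux-arm64 -p functional-074420 ssh -- ls -la /etc/cni/net.d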
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:36.744703   23243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:36.745127   23243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:36.746746   23243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:36.747349   23243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:36.749177   23243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
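	
	The repeated refusals point at an apiserver that never came up rather than a name-resolution or routing problem; the same probe can be run from the host through the published port (8441/tcp maps to 127.0.0.1:32791 in the docker inspect output). A sketch, assuming curl on the host:
	
	# -k because the apiserver certificate does not cover 127.0.0.1
	curl -sk https://127.0.0.1:32791/healthz || echo "apiserver unreachable"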
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:08:36 up 51 min,  0 user,  load average: 0.72, 0.31, 0.36
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:08:33 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:34 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 09:08:34 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:34 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:34 functional-074420 kubelet[23104]: E1213 09:08:34.332785   23104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:34 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:34 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 09:08:35 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 kubelet[23125]: E1213 09:08:35.091576   23125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 09:08:35 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 kubelet[23140]: E1213 09:08:35.828496   23140 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 09:08:36 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:36 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:36 functional-074420 kubelet[23200]: E1213 09:08:36.584914   23200 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
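	
	Every restart in this window (counters 483 through 486) dies on the same validation error, so the crashloop looks environmental rather than flaky: this kubelet is configured to refuse cgroup v1 hosts, and Ubuntu 20.04 boots the legacy v1 hierarchy unless systemd.unified_cgroup_hierarchy=1 is set on the kernel command line. A quick check of which hierarchy the node actually sees (stat reports cgroup2fs on v2, tmpfs on v1):
	
	docker exec functional-074420 stat -fc %T /sys/fs/cgroup/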
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (304.148654ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.99s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 status: exit status 2 (316.710461ms)

-- stdout --
	functional-074420
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-074420 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (301.583976ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-074420 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 status -o json: exit status 2 (307.122446ms)

-- stdout --
	{"Name":"functional-074420","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-074420 status -o json" : exit status 2
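Of the three status forms exercised above, the JSON output is the easiest to assert on from scripts. A sketch, assuming jq on the host (minikube still exits 2 while components are down, so read stdout before checking the exit code):

	out/minikube-linux-arm64 -p functional-074420 status -o json | jq -r .APIServer
	# prints "Stopped", matching the template and plain-text outputs above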
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (333.396492ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-074420 service list                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ service │ functional-074420 service list -o json                                                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ service │ functional-074420 service --namespace=default --https --url hello-node                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ service │ functional-074420 service hello-node --url --format={{.IP}}                                                                                         │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ service │ functional-074420 service hello-node --url                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount   │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1               │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh cat /mount-9p/test-1765616905262471887                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount   │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount   │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount1 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount   │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount3 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount   │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount2 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh     │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh findmnt -T /mount2                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh     │ functional-074420 ssh findmnt -T /mount3                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount   │ -p functional-074420 --kill=true                                                                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:54:17.881015   53550 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:54:17.881119   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881124   53550 out.go:374] Setting ErrFile to fd 2...
	I1213 08:54:17.881127   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881367   53550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:54:17.881711   53550 out.go:368] Setting JSON to false
	I1213 08:54:17.882486   53550 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2210,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:54:17.882543   53550 start.go:143] virtualization:  
	I1213 08:54:17.885916   53550 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:54:17.888999   53550 notify.go:221] Checking for updates...
	I1213 08:54:17.889435   53550 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:54:17.892383   53550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:54:17.895200   53550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:54:17.898042   53550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:54:17.900839   53550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:54:17.903626   53550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:54:17.906955   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:17.907037   53550 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:54:17.945038   53550 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:54:17.945157   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.004102   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:17.99317471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.004214   53550 docker.go:319] overlay module found
	I1213 08:54:18.009730   53550 out.go:179] * Using the docker driver based on existing profile
	I1213 08:54:18.012694   53550 start.go:309] selected driver: docker
	I1213 08:54:18.012706   53550 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.012816   53550 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:54:18.012919   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.070601   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:18.060838365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.071017   53550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:54:18.071040   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:18.071105   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:18.071147   53550 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.074420   53550 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:54:18.077242   53550 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:54:18.080227   53550 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:54:18.083176   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:18.083216   53550 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:54:18.083225   53550 cache.go:65] Caching tarball of preloaded images
	I1213 08:54:18.083262   53550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:54:18.083328   53550 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:54:18.083337   53550 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:54:18.083454   53550 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:54:18.104039   53550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:54:18.104049   53550 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:54:18.104071   53550 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:54:18.104097   53550 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:54:18.104173   53550 start.go:364] duration metric: took 60.013µs to acquireMachinesLock for "functional-074420"
	I1213 08:54:18.104193   53550 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:54:18.104198   53550 fix.go:54] fixHost starting: 
	I1213 08:54:18.104469   53550 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:54:18.121469   53550 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:54:18.121489   53550 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:54:18.124664   53550 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:54:18.124700   53550 machine.go:94] provisionDockerMachine start ...
	I1213 08:54:18.124779   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.142221   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.142535   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.142542   53550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:54:18.290889   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.290902   53550 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:54:18.290965   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.308398   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.308699   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.308706   53550 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:54:18.463898   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.463977   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.481808   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.482113   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.482128   53550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:54:18.639897   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
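	The hostname script above is idempotent: it only rewrites /etc/hosts when no alias for the node name exists yet. A minimal sketch of the entry it converges on (illustrative; not captured from this run):
	    # /etc/hosts inside the container after provisioning
	    127.0.1.1 functional-074420
	    # spot-check: grep functional-074420 /etc/hosts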
	I1213 08:54:18.639913   53550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:54:18.639945   53550 ubuntu.go:190] setting up certificates
	I1213 08:54:18.639960   53550 provision.go:84] configureAuth start
	I1213 08:54:18.640021   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:18.657069   53550 provision.go:143] copyHostCerts
	I1213 08:54:18.657137   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:54:18.657145   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:54:18.657224   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:54:18.657317   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:54:18.657321   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:54:18.657345   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:54:18.657393   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:54:18.657396   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:54:18.657421   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:54:18.657462   53550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:54:18.978851   53550 provision.go:177] copyRemoteCerts
	I1213 08:54:18.978913   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:54:18.978954   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.996497   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.099309   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:54:19.116489   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:54:19.134491   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:54:19.152584   53550 provision.go:87] duration metric: took 512.603195ms to configureAuth
	I1213 08:54:19.152601   53550 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:54:19.152798   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:19.152804   53550 machine.go:97] duration metric: took 1.028099835s to provisionDockerMachine
	I1213 08:54:19.152810   53550 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:54:19.152820   53550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:54:19.152868   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:54:19.152914   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.170238   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.275637   53550 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:54:19.280193   53550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:54:19.280211   53550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:54:19.280223   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:54:19.280276   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:54:19.280348   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:54:19.280419   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:54:19.280458   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:54:19.288420   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:19.306689   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:54:19.324595   53550 start.go:296] duration metric: took 171.770829ms for postStartSetup
	I1213 08:54:19.324673   53550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:54:19.324742   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.347206   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.449063   53550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:54:19.453865   53550 fix.go:56] duration metric: took 1.349660427s for fixHost
	I1213 08:54:19.453881   53550 start.go:83] releasing machines lock for "functional-074420", held for 1.349700469s
	I1213 08:54:19.453945   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:19.471349   53550 ssh_runner.go:195] Run: cat /version.json
	I1213 08:54:19.471396   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.471420   53550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:54:19.471481   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.492979   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.505163   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.686546   53550 ssh_runner.go:195] Run: systemctl --version
	I1213 08:54:19.692986   53550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:54:19.697303   53550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:54:19.697365   53550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:54:19.705133   53550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:54:19.705146   53550 start.go:496] detecting cgroup driver to use...
	I1213 08:54:19.705176   53550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:54:19.705226   53550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:54:19.720729   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:54:19.733460   53550 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:54:19.733514   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:54:19.748695   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:54:19.761831   53550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:54:19.870034   53550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:54:19.996014   53550 docker.go:234] disabling docker service ...
	I1213 08:54:19.996078   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:54:20.014799   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:54:20.030104   53550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:54:20.162441   53550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:54:20.283014   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:54:20.297184   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:54:20.311847   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:54:20.321141   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:54:20.330609   53550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:54:20.330677   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:54:20.339444   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.348072   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:54:20.356752   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.365663   53550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:54:20.373861   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:54:20.383214   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:54:20.392296   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
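	The run of sed edits above rewrites a handful of containerd settings in place. A sketch of how one might spot-check the result inside the node (the expected values are taken from the commands above; the grep itself is illustrative):
	    sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    # expected per the edits: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
	    # restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true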
	I1213 08:54:20.401182   53550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:54:20.408521   53550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:54:20.415857   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:20.524736   53550 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 08:54:20.667475   53550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:54:20.667553   53550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:54:20.671249   53550 start.go:564] Will wait 60s for crictl version
	I1213 08:54:20.671308   53550 ssh_runner.go:195] Run: which crictl
	I1213 08:54:20.674869   53550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:54:20.699246   53550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 08:54:20.699301   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.723418   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.748134   53550 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:54:20.751095   53550 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:54:20.766935   53550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:54:20.773949   53550 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 08:54:20.776880   53550 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:54:20.777036   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:20.777116   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.804622   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.804634   53550 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:54:20.804691   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.834431   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.834444   53550 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:54:20.834451   53550 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:54:20.834559   53550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
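	The unit snippet above is written out a few lines below as the 10-kubeadm.conf drop-in; a quick way to confirm systemd picked it up (a sketch, assuming a systemd-based node as in this run):
	    systemctl cat kubelet        # prints kubelet.service plus any drop-ins, including 10-kubeadm.conf
	    systemctl status kubelet --no-pager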
	I1213 08:54:20.834624   53550 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:54:20.867174   53550 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 08:54:20.867192   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:20.867200   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:20.867220   53550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:54:20.867242   53550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:54:20.867356   53550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
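	A config file like the one above can be checked before it is handed to the kubeadm phases further down; a hedged sketch (the validate subcommand ships with kubeadm v1.26+, so availability depends on the pinned binary):
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml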
	
	I1213 08:54:20.867422   53550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:54:20.875127   53550 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:54:20.875185   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:54:20.882880   53550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:54:20.898646   53550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:54:20.911841   53550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1213 08:54:20.925067   53550 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:54:20.928972   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:21.047902   53550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:54:21.521591   53550 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:54:21.521603   53550 certs.go:195] generating shared ca certs ...
	I1213 08:54:21.521617   53550 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:54:21.521756   53550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:54:21.521796   53550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:54:21.521802   53550 certs.go:257] generating profile certs ...
	I1213 08:54:21.521883   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:54:21.521933   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:54:21.521973   53550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:54:21.522082   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:54:21.522113   53550 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:54:21.522120   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:54:21.522146   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:54:21.522168   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:54:21.522190   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:54:21.522232   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:21.522796   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:54:21.547463   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:54:21.565502   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:54:21.583029   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:54:21.600675   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:54:21.617821   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:54:21.634794   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:54:21.652088   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:54:21.669338   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:54:21.685563   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:54:21.702834   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:54:21.719220   53550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:54:21.731588   53550 ssh_runner.go:195] Run: openssl version
	I1213 08:54:21.737357   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.744365   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:54:21.751316   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754910   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754961   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.795815   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:54:21.802933   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.809987   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:54:21.817141   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820600   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820668   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.861349   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:54:21.868464   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.875279   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:54:21.882257   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.885950   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.886012   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.927672   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
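	The hash/symlink/test triple repeated three times above is the standard OpenSSL trust-directory layout: each CA file is exposed under its subject hash as <hash>.0. The same dance for one cert, sketched from the commands in this log:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    sudo test -L "/etc/ssl/certs/${h}.0" && echo linked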
	I1213 08:54:21.934830   53550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:54:21.938562   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:54:21.979443   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:54:22.023588   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:54:22.065341   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:54:22.106598   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:54:22.147410   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 08:54:22.188516   53550 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:22.188592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:54:22.188655   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.213570   53550 cri.go:89] found id: ""
	I1213 08:54:22.213647   53550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:54:22.221547   53550 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:54:22.221555   53550 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:54:22.221616   53550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:54:22.229060   53550 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.229555   53550 kubeconfig.go:125] found "functional-074420" server: "https://192.168.49.2:8441"
	I1213 08:54:22.232016   53550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:54:22.239904   53550 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:39:47.751417218 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 08:54:20.919594824 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
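	Once the control plane comes back, the admission-plugin change in the diff above should surface in the apiserver static pod manifest (path per the StaticPodPath in this log); an illustrative check:
	    sudo grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
	    # expect: --enable-admission-plugins=NamespaceAutoProvision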
	I1213 08:54:22.239924   53550 kubeadm.go:1161] stopping kube-system containers ...
	I1213 08:54:22.239936   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 08:54:22.239998   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.266484   53550 cri.go:89] found id: ""
	I1213 08:54:22.266565   53550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 08:54:22.285823   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:54:22.293457   53550 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:43 /etc/kubernetes/scheduler.conf
	
	I1213 08:54:22.293536   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:54:22.301460   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:54:22.308894   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.308947   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:54:22.316083   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.323905   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.323959   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.331273   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:54:22.338736   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.338789   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:54:22.346320   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:54:22.354109   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:22.400461   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.430760   53550 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.030276983s)
	I1213 08:54:24.430822   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.648055   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.718708   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
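
With the new config copied into place, minikube replays individual kubeadm phases rather than running a full `kubeadm init`: certs, then kubeconfig (the slow step here, ~2s), then kubelet-start, the control-plane manifests, and the local etcd manifest. The sequence as a sketch, with PATH prefixed by the versioned binaries directory exactly as in the ssh_runner commands above:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, p := range phases {
            cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" `+
                `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", p, err, out)
            }
        }
    }
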
	I1213 08:54:24.760609   53550 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:54:24.760672   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.261709   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe repeated at ~0.5s intervals: 117 further `sudo pgrep -xnf kube-apiserver.*minikube.*` attempts from 08:54:25.76 through 08:55:23.76 elided ...]
	I1213 08:55:24.261657   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
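
The timestamps above imply the shape of the wait: one `pgrep -xnf kube-apiserver.*minikube.*` probe every ~500ms, starting at 08:54:24.76 and giving up around 08:55:24.76, i.e. roughly a 60-second budget before minikube falls through to diagnostics. The apiserver process never appears. A sketch of that loop (the cadence and budget are inferred from the log, not read from source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never appeared; collecting diagnostics")
    }
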
	I1213 08:55:24.761756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:24.761828   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:24.786217   53550 cri.go:89] found id: ""
	I1213 08:55:24.786236   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.786243   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:24.786249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:24.786328   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:24.809104   53550 cri.go:89] found id: ""
	I1213 08:55:24.809118   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.809125   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:24.809130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:24.809187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:24.832861   53550 cri.go:89] found id: ""
	I1213 08:55:24.832880   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.832887   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:24.832892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:24.832949   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:24.856552   53550 cri.go:89] found id: ""
	I1213 08:55:24.856566   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.856573   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:24.856578   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:24.856634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:24.879617   53550 cri.go:89] found id: ""
	I1213 08:55:24.879631   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.879638   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:24.879643   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:24.879700   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:24.905506   53550 cri.go:89] found id: ""
	I1213 08:55:24.905520   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.905526   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:24.905532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:24.905588   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:24.930567   53550 cri.go:89] found id: ""
	I1213 08:55:24.930581   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.930587   53550 logs.go:284] No container was found matching "kindnet"
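
Having timed out, minikube scans the CRI for each control-plane component it expects to see as a container. Every query returns an empty ID list, so not even an exited kube-apiserver, etcd, coredns, scheduler, kube-proxy, controller-manager or kindnet container exists: the kubelet evidently never created the static pods. The scan, sketched:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // -a includes exited containers; --quiet prints bare IDs.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
                "--name="+name).Output()
            if err != nil || len(strings.TrimSpace(string(out))) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
            }
        }
    }
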
	I1213 08:55:24.930595   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:24.930605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:24.961663   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:24.961679   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:25.017689   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:25.017709   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:25.035228   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:25.035257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:25.112728   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:25.112738   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:25.112750   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
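
The diagnostic pass then gathers four sources that do not depend on a live apiserver: container status (crictl, with a docker fallback), the last 400 lines of the kubelet and containerd journals, and a filtered dmesg. It also runs `kubectl describe nodes`, which at this point can only fail, since nothing is listening on port 8441 (hence the repeated "connect: connection refused" against localhost:8441). A sketch of the gather step, assuming each collector is best-effort:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        collectors := map[string]string{
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
        }
        for name, cmd := range collectors {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                // Keep going: a failed collector must not hide the others.
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("== %s ==\n%s\n", name, out)
        }
    }
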
	I1213 08:55:27.676671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:27.686646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:27.686705   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:27.710449   53550 cri.go:89] found id: ""
	I1213 08:55:27.710462   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.710469   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:27.710474   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:27.710531   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:27.734910   53550 cri.go:89] found id: ""
	I1213 08:55:27.734923   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.734943   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:27.734949   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:27.735007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:27.762767   53550 cri.go:89] found id: ""
	I1213 08:55:27.762787   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.762794   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:27.762799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:27.762853   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:27.789263   53550 cri.go:89] found id: ""
	I1213 08:55:27.789282   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.789288   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:27.789293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:27.789352   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:27.817361   53550 cri.go:89] found id: ""
	I1213 08:55:27.817374   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.817381   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:27.817386   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:27.817444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:27.841034   53550 cri.go:89] found id: ""
	I1213 08:55:27.841047   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.841054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:27.841059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:27.841114   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:27.865949   53550 cri.go:89] found id: ""
	I1213 08:55:27.865963   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.865970   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:27.865978   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:27.865988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:27.921352   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:27.921372   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:27.934950   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:27.934966   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:28.012009   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:28.012023   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:28.012036   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:28.081214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:28.081231   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
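
From here the run settles into a steady probe-and-diagnose cycle: roughly every three seconds, one pgrep probe, the seven-component crictl scan, and another gather round. One detail worth noting is that the gather order changes between rounds (container status first in one cycle, kubelet first in the next, dmesg first later on), which is consistent with iterating over a Go map, whose iteration order is deliberately randomized (an inference from the log, not confirmed from source):

    package main

    import "fmt"

    func main() {
        steps := map[string]bool{
            "kubelet": true, "dmesg": true, "describe nodes": true,
            "containerd": true, "container status": true,
        }
        for name := range steps {
            fmt.Println(name) // order differs from run to run
        }
    }
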
	[... the probe-and-diagnose cycle repeated with fresh timestamps at 08:55:30, 08:55:33, 08:55:36 and 08:55:39 (kubectl PIDs 10959, 11062, 11183 and 11275): each round found no kube-apiserver process, empty crictl results for all seven component names, and the same "connection refused" failure from `kubectl describe nodes` against localhost:8441; four cycles (264 log lines) elided here ...]
	I1213 08:55:42.285019   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:42.297179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:42.297241   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:42.328576   53550 cri.go:89] found id: ""
	I1213 08:55:42.328589   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.328611   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:42.328616   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:42.328678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:42.356055   53550 cri.go:89] found id: ""
	I1213 08:55:42.356069   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.356077   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:42.356082   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:42.356141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:42.380770   53550 cri.go:89] found id: ""
	I1213 08:55:42.380783   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.380790   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:42.380796   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:42.380866   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:42.409446   53550 cri.go:89] found id: ""
	I1213 08:55:42.409460   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.409466   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:42.409471   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:42.409530   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:42.433502   53550 cri.go:89] found id: ""
	I1213 08:55:42.433515   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.433522   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:42.433527   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:42.433583   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:42.458312   53550 cri.go:89] found id: ""
	I1213 08:55:42.458325   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.458336   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:42.458341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:42.458401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:42.482681   53550 cri.go:89] found id: ""
	I1213 08:55:42.482694   53550 logs.go:282] 0 containers: []
	W1213 08:55:42.482702   53550 logs.go:284] No container was found matching "kindnet"
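
The block above is minikube's cri.go sweep: one crictl query per control-plane component, each returning an empty ID list. The same sweep can be reproduced on the node as a single loop (a sketch; crictl ships in the minikube node image):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done

Empty output for every component means containerd never created the control-plane containers, which is why the apiserver port stays closed.
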
	I1213 08:55:42.482709   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:42.482719   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:42.544167   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:42.544185   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:42.572064   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:42.572079   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:42.629874   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:42.629892   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:42.641069   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:42.641084   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:42.704996   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:42.696662   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.697320   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.698873   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.699442   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:42.701188   11392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
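
The "describe nodes" step shells out to the kubectl binary minikube installed on the node, pointed at the node-local kubeconfig, so it fails with the same connection-refused errors as any other kubectl call while the apiserver is down:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
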
	I1213 08:55:45.206980   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:45.225798   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:45.225900   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:45.260556   53550 cri.go:89] found id: ""
	I1213 08:55:45.260579   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.260586   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:45.260592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:45.260660   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:45.300170   53550 cri.go:89] found id: ""
	I1213 08:55:45.300183   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.300190   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:45.300195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:45.300253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:45.335036   53550 cri.go:89] found id: ""
	I1213 08:55:45.335050   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.335057   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:45.335062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:45.335123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:45.366574   53550 cri.go:89] found id: ""
	I1213 08:55:45.366587   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.366594   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:45.366599   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:45.366659   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:45.391767   53550 cri.go:89] found id: ""
	I1213 08:55:45.391781   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.391788   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:45.391793   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:45.391850   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:45.416855   53550 cri.go:89] found id: ""
	I1213 08:55:45.416869   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.416876   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:45.416882   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:45.416941   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:45.441837   53550 cri.go:89] found id: ""
	I1213 08:55:45.441859   53550 logs.go:282] 0 containers: []
	W1213 08:55:45.441867   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:45.441875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:45.441885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:45.499186   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:45.499203   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:45.510383   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:45.510401   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:45.577305   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:45.568662   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.569333   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.570928   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.571293   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:45.572858   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:45.577329   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:45.577340   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:45.639739   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:45.639761   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:48.174772   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:48.185188   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:48.185250   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:48.210180   53550 cri.go:89] found id: ""
	I1213 08:55:48.210194   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.210200   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:48.210205   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:48.210268   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:48.235002   53550 cri.go:89] found id: ""
	I1213 08:55:48.235015   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.235022   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:48.235027   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:48.235085   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:48.259922   53550 cri.go:89] found id: ""
	I1213 08:55:48.259936   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.259943   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:48.259948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:48.260007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:48.304590   53550 cri.go:89] found id: ""
	I1213 08:55:48.304605   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.304611   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:48.304617   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:48.304675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:48.342676   53550 cri.go:89] found id: ""
	I1213 08:55:48.342690   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.342697   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:48.342703   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:48.342759   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:48.366578   53550 cri.go:89] found id: ""
	I1213 08:55:48.366592   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.366599   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:48.366604   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:48.366673   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:48.391059   53550 cri.go:89] found id: ""
	I1213 08:55:48.391073   53550 logs.go:282] 0 containers: []
	W1213 08:55:48.391080   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:48.391089   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:48.391099   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:48.462962   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:48.454137   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.455049   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.456663   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.457133   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:48.458628   11581 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:48.462973   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:48.462988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:48.526213   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:48.526232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:48.556890   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:48.556905   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:48.613408   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:48.613425   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
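
The four "Gathering logs" commands are plain journalctl/dmesg invocations and can be run directly on the node; the container-status variant uses a backtick fallback so it degrades to docker when crictl is missing:

    sudo journalctl -u containerd -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
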
	I1213 08:55:51.124505   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:51.134928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:51.134985   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:51.159141   53550 cri.go:89] found id: ""
	I1213 08:55:51.159154   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.159161   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:51.159166   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:51.159222   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:51.182690   53550 cri.go:89] found id: ""
	I1213 08:55:51.182704   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.182711   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:51.182716   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:51.182773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:51.208685   53550 cri.go:89] found id: ""
	I1213 08:55:51.208698   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.208705   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:51.208710   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:51.208766   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:51.233183   53550 cri.go:89] found id: ""
	I1213 08:55:51.233197   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.233204   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:51.233209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:51.233270   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:51.258042   53550 cri.go:89] found id: ""
	I1213 08:55:51.258069   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.258076   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:51.258081   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:51.258147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:51.290467   53550 cri.go:89] found id: ""
	I1213 08:55:51.290481   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.290488   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:51.290495   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:51.290566   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:51.323202   53550 cri.go:89] found id: ""
	I1213 08:55:51.323216   53550 logs.go:282] 0 containers: []
	W1213 08:55:51.323223   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:51.323231   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:51.323240   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:51.394188   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:51.394206   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:51.426214   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:51.426230   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:51.485838   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:51.485855   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:51.496565   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:51.496580   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:51.576933   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:51.567900   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.568559   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570248   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.570845   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:51.572406   11705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
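
Each cycle opens with a pgrep probe: -f matches the pattern against the full command line, -x requires it to match that whole line, and -n returns only the newest matching PID. A non-zero exit status here simply means no kube-apiserver process exists:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo $?    # 1 throughout this run: no such process
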
	I1213 08:55:54.077204   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:54.087800   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:54.087874   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:54.113040   53550 cri.go:89] found id: ""
	I1213 08:55:54.113055   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.113062   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:54.113067   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:54.113124   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:54.138822   53550 cri.go:89] found id: ""
	I1213 08:55:54.138835   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.138842   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:54.138847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:54.138906   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:54.163439   53550 cri.go:89] found id: ""
	I1213 08:55:54.163452   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.163459   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:54.163465   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:54.163557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:54.188125   53550 cri.go:89] found id: ""
	I1213 08:55:54.188138   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.188145   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:54.188152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:54.188208   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:54.212893   53550 cri.go:89] found id: ""
	I1213 08:55:54.212907   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.212914   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:54.212920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:54.212981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:54.237373   53550 cri.go:89] found id: ""
	I1213 08:55:54.237386   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.237393   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:54.237399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:54.237459   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:54.265504   53550 cri.go:89] found id: ""
	I1213 08:55:54.265518   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.265525   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:54.265532   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:54.265542   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:54.333125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:54.333143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:54.347402   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:54.347418   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:54.412166   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:54.412175   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:54.412187   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:54.480709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:54.480730   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.010334   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:57.021059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:57.021120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:57.047281   53550 cri.go:89] found id: ""
	I1213 08:55:57.047294   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.047301   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:57.047306   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:57.047377   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:57.071416   53550 cri.go:89] found id: ""
	I1213 08:55:57.071429   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.071436   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:57.071441   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:57.071498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:57.101079   53550 cri.go:89] found id: ""
	I1213 08:55:57.101092   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.101104   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:57.101110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:57.101166   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:57.125577   53550 cri.go:89] found id: ""
	I1213 08:55:57.125591   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.125598   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:57.125603   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:57.125664   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:57.150869   53550 cri.go:89] found id: ""
	I1213 08:55:57.150883   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.150890   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:57.150895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:57.150952   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:57.175181   53550 cri.go:89] found id: ""
	I1213 08:55:57.175196   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.175203   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:57.175209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:57.175265   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:57.201951   53550 cri.go:89] found id: ""
	I1213 08:55:57.201964   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.201981   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:57.201989   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:57.202000   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.230175   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:57.230191   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:57.289371   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:57.289389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:57.301801   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:57.301816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:57.376259   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:55:57.376279   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:57.376290   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:59.938203   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:59.948941   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:59.949015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:59.975053   53550 cri.go:89] found id: ""
	I1213 08:55:59.975067   53550 logs.go:282] 0 containers: []
	W1213 08:55:59.975074   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:59.975079   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:59.975140   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:00.036168   53550 cri.go:89] found id: ""
	I1213 08:56:00.036184   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.036198   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:00.036204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:00.036272   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:00.212433   53550 cri.go:89] found id: ""
	I1213 08:56:00.212448   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.212457   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:00.212463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:00.212534   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:00.329892   53550 cri.go:89] found id: ""
	I1213 08:56:00.329922   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.329931   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:00.329937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:00.330147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:00.418357   53550 cri.go:89] found id: ""
	I1213 08:56:00.418382   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.418390   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:00.418395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:00.418485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:00.472022   53550 cri.go:89] found id: ""
	I1213 08:56:00.472038   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.472057   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:00.472063   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:00.472147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:00.501778   53550 cri.go:89] found id: ""
	I1213 08:56:00.501793   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.501800   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:00.501809   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:00.501821   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:00.514889   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:00.514908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:00.586263   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:00.576477   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.577506   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.579365   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.580284   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.582096   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:00.586275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:00.586286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:00.651709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:00.651729   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:00.679944   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:00.679961   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:03.240030   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:03.250487   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:03.250564   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:03.276985   53550 cri.go:89] found id: ""
	I1213 08:56:03.276999   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.277006   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:03.277011   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:03.277079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:03.305874   53550 cri.go:89] found id: ""
	I1213 08:56:03.305887   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.305894   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:03.305900   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:03.305961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:03.332792   53550 cri.go:89] found id: ""
	I1213 08:56:03.332805   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.332812   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:03.332817   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:03.332875   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:03.359327   53550 cri.go:89] found id: ""
	I1213 08:56:03.359340   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.359347   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:03.359352   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:03.359414   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:03.383789   53550 cri.go:89] found id: ""
	I1213 08:56:03.383802   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.383818   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:03.383823   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:03.383881   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:03.409294   53550 cri.go:89] found id: ""
	I1213 08:56:03.409308   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.409315   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:03.409320   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:03.409380   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:03.433579   53550 cri.go:89] found id: ""
	I1213 08:56:03.433593   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.433600   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:03.433608   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:03.433620   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:03.444272   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:03.444288   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:03.513583   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:03.505953   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.506372   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507548   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507861   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.509300   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:03.513594   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:03.513605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:03.576629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:03.576649   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:03.608162   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:03.608178   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
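
From 08:55:39 to 08:56:06 the cycle repeats unchanged roughly every three seconds, and it will keep doing so until the start timeout expires. With containerd answering crictl queries but zero control-plane containers ever created, the useful signal is in the kubelet journal already being gathered; a plausible next diagnostic step (a sketch — the manifest path is the standard kubeadm location, not quoted from this log) is:

    ls /etc/kubernetes/manifests
    sudo journalctl -u kubelet -n 400 | grep -iE 'apiserver|manifest|static'
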
	I1213 08:56:06.165156   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:06.175029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:06.175086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:06.199548   53550 cri.go:89] found id: ""
	I1213 08:56:06.199561   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.199567   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:06.199573   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:06.199630   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:06.223345   53550 cri.go:89] found id: ""
	I1213 08:56:06.223358   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.223365   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:06.223370   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:06.223427   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:06.253772   53550 cri.go:89] found id: ""
	I1213 08:56:06.253785   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.253792   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:06.253797   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:06.253862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:06.285197   53550 cri.go:89] found id: ""
	I1213 08:56:06.285209   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.285216   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:06.285221   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:06.285287   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:06.311117   53550 cri.go:89] found id: ""
	I1213 08:56:06.311130   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.311137   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:06.311142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:06.311199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:06.347101   53550 cri.go:89] found id: ""
	I1213 08:56:06.347115   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.347121   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:06.347134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:06.347212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:06.373093   53550 cri.go:89] found id: ""
	I1213 08:56:06.373106   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.373113   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:06.373121   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:06.373131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.432261   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:06.432286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:06.443840   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:06.443858   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:06.510711   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:06.501971   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.502684   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.504393   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.505195   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.506872   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:06.510722   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:06.510745   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:06.572342   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:06.572360   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
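The repeated "connection refused" errors are kubectl failing to reach the apiserver endpoint on the node. The same failure can be checked by hand; the first command below is an assumed convenience (it presumes curl exists in the node image), while the second is copied verbatim from the log:

    # Both should fail with "connection refused" while nothing listens on 8441.
    curl -k --max-time 5 'https://localhost:8441/api?timeout=32s'
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig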
	I1213 08:56:09.099708   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:09.109781   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:09.109837   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:09.134708   53550 cri.go:89] found id: ""
	I1213 08:56:09.134722   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.134729   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:09.134734   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:09.134793   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:09.159277   53550 cri.go:89] found id: ""
	I1213 08:56:09.159291   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.159297   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:09.159302   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:09.159367   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:09.185743   53550 cri.go:89] found id: ""
	I1213 08:56:09.185756   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.185763   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:09.185768   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:09.185827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:09.209881   53550 cri.go:89] found id: ""
	I1213 08:56:09.209894   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.209901   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:09.209907   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:09.209963   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:09.233078   53550 cri.go:89] found id: ""
	I1213 08:56:09.233091   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.233099   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:09.233104   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:09.233165   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:09.261187   53550 cri.go:89] found id: ""
	I1213 08:56:09.261200   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.261208   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:09.261216   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:09.261274   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:09.303988   53550 cri.go:89] found id: ""
	I1213 08:56:09.304001   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.304008   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:09.304016   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:09.304035   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:09.366963   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:09.366982   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:09.377754   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:09.377770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:09.445863   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:09.437325   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.437871   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.439698   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.440090   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.442054   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:09.445873   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:09.445884   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:09.507900   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:09.507918   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.036492   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:12.046919   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:12.046978   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:12.071196   53550 cri.go:89] found id: ""
	I1213 08:56:12.071211   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.071218   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:12.071223   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:12.071285   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:12.097508   53550 cri.go:89] found id: ""
	I1213 08:56:12.097522   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.097529   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:12.097534   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:12.097591   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:12.122628   53550 cri.go:89] found id: ""
	I1213 08:56:12.122641   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.122649   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:12.122654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:12.122714   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:12.147292   53550 cri.go:89] found id: ""
	I1213 08:56:12.147306   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.147313   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:12.147318   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:12.147385   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:12.171601   53550 cri.go:89] found id: ""
	I1213 08:56:12.171615   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.171622   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:12.171629   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:12.171685   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:12.195241   53550 cri.go:89] found id: ""
	I1213 08:56:12.195255   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.195272   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:12.195277   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:12.195332   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:12.220835   53550 cri.go:89] found id: ""
	I1213 08:56:12.220849   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.220866   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:12.220874   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:12.220883   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:12.283214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:12.283232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.322176   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:12.322192   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:12.382990   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:12.383007   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:12.393976   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:12.393993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:12.454561   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:12.446899   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.447411   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.448884   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.449202   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.450694   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:14.956323   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:14.966379   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:14.966439   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:14.992786   53550 cri.go:89] found id: ""
	I1213 08:56:14.992801   53550 logs.go:282] 0 containers: []
	W1213 08:56:14.992807   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:14.992813   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:14.992876   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:15.028638   53550 cri.go:89] found id: ""
	I1213 08:56:15.028653   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.028660   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:15.028666   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:15.028735   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:15.059274   53550 cri.go:89] found id: ""
	I1213 08:56:15.059288   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.059295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:15.059301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:15.059408   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:15.089311   53550 cri.go:89] found id: ""
	I1213 08:56:15.089324   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.089331   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:15.089336   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:15.089401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:15.118691   53550 cri.go:89] found id: ""
	I1213 08:56:15.118705   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.118712   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:15.118717   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:15.118773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:15.144494   53550 cri.go:89] found id: ""
	I1213 08:56:15.144507   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.144514   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:15.144519   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:15.144577   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:15.173885   53550 cri.go:89] found id: ""
	I1213 08:56:15.173899   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.173905   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:15.173914   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:15.173925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:15.236112   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:15.228066   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.228792   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230471   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230772   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.232236   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:15.236121   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:15.236134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:15.298113   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:15.298131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:15.342964   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:15.342980   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:15.400545   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:15.400563   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
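When the probe finds nothing, the waiter gathers a diagnostic bundle before the next attempt. The bundle is just the five Run: commands that recur in every cycle, collected here for reference (copied from the log; only their ordering varies between cycles):

    # Diagnostic bundle gathered on each failed cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig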
	I1213 08:56:17.911444   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:17.921343   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:17.921402   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:17.947826   53550 cri.go:89] found id: ""
	I1213 08:56:17.947840   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.947847   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:17.947852   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:17.947908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:17.971346   53550 cri.go:89] found id: ""
	I1213 08:56:17.971376   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.971383   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:17.971387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:17.971449   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:17.999271   53550 cri.go:89] found id: ""
	I1213 08:56:17.999285   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.999292   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:17.999298   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:17.999371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:18.031971   53550 cri.go:89] found id: ""
	I1213 08:56:18.031984   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.031991   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:18.031996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:18.032058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:18.057098   53550 cri.go:89] found id: ""
	I1213 08:56:18.057112   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.057119   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:18.057127   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:18.057187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:18.081981   53550 cri.go:89] found id: ""
	I1213 08:56:18.082007   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.082014   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:18.082021   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:18.082092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:18.108138   53550 cri.go:89] found id: ""
	I1213 08:56:18.108152   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.108159   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:18.108166   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:18.108179   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:18.118705   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:18.118723   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:18.182232   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:18.173836   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.174533   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176073   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176393   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.177924   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:18.182242   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:18.182253   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:18.243585   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:18.243606   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:18.292655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:18.292671   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:20.860353   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:20.870680   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:20.870753   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:20.895485   53550 cri.go:89] found id: ""
	I1213 08:56:20.895499   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.895506   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:20.895532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:20.895592   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:20.921461   53550 cri.go:89] found id: ""
	I1213 08:56:20.921475   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.921482   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:20.921486   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:20.921545   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:20.946484   53550 cri.go:89] found id: ""
	I1213 08:56:20.946498   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.946507   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:20.946512   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:20.946570   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:20.971723   53550 cri.go:89] found id: ""
	I1213 08:56:20.971737   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.971744   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:20.971749   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:20.971806   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:20.996903   53550 cri.go:89] found id: ""
	I1213 08:56:20.996917   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.996924   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:20.996929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:20.996987   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:21.025270   53550 cri.go:89] found id: ""
	I1213 08:56:21.025283   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.025290   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:21.025295   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:21.025354   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:21.050984   53550 cri.go:89] found id: ""
	I1213 08:56:21.050998   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.051005   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:21.051013   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:21.051024   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:21.061853   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:21.061867   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:21.130720   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:21.122077   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.122912   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124411   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124796   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.126237   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:21.130741   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:21.130753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:21.194629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:21.194647   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:21.222790   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:21.222806   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:23.780448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:23.790523   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:23.790584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:23.815703   53550 cri.go:89] found id: ""
	I1213 08:56:23.815717   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.815724   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:23.815729   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:23.815790   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:23.844047   53550 cri.go:89] found id: ""
	I1213 08:56:23.844062   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.844069   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:23.844074   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:23.844132   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:23.868824   53550 cri.go:89] found id: ""
	I1213 08:56:23.868837   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.868844   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:23.868849   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:23.868908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:23.893054   53550 cri.go:89] found id: ""
	I1213 08:56:23.893067   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.893084   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:23.893089   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:23.893158   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:23.918102   53550 cri.go:89] found id: ""
	I1213 08:56:23.918115   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.918141   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:23.918146   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:23.918221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:23.943674   53550 cri.go:89] found id: ""
	I1213 08:56:23.943706   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.943713   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:23.943719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:23.943780   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:23.969229   53550 cri.go:89] found id: ""
	I1213 08:56:23.969242   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.969250   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:23.969258   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:23.969268   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:24.024433   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:24.024452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:24.036371   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:24.036394   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:24.106333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:24.106343   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:24.106354   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:24.169184   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:24.169204   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
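The timestamps show the whole probe-and-gather cycle repeating at roughly three-second intervals until the test's deadline expires. A minimal shell equivalent of that outer loop, with an arbitrary illustrative deadline (the real timeout is set by the test harness and is not shown in this excerpt):

    # Sketch of the retry loop; 600s is a placeholder, not the harness value.
    deadline=$(( $(date +%s) + 600 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "gave up waiting" >&2; break; }
      sleep 3   # the log shows roughly 3-second gaps between cycles
    done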
	I1213 08:56:26.698614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:26.708577   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:26.708633   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:26.732922   53550 cri.go:89] found id: ""
	I1213 08:56:26.732936   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.732943   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:26.732948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:26.733006   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:26.755987   53550 cri.go:89] found id: ""
	I1213 08:56:26.756000   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.756007   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:26.756012   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:26.756070   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:26.780069   53550 cri.go:89] found id: ""
	I1213 08:56:26.780082   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.780089   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:26.780094   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:26.780152   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:26.803904   53550 cri.go:89] found id: ""
	I1213 08:56:26.803916   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.803923   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:26.803928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:26.803983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:26.829092   53550 cri.go:89] found id: ""
	I1213 08:56:26.829106   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.829114   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:26.829119   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:26.829177   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:26.853845   53550 cri.go:89] found id: ""
	I1213 08:56:26.853858   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.853865   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:26.853870   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:26.853925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:26.878415   53550 cri.go:89] found id: ""
	I1213 08:56:26.878428   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.878435   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:26.878443   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:26.878452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:26.934265   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:26.934282   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:26.945523   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:26.945543   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:27.018637   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:27.009719   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.010496   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012205   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012866   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.014551   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:27.018647   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:27.018658   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:27.084954   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:27.084972   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:29.613085   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:29.622947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:29.623004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:29.646959   53550 cri.go:89] found id: ""
	I1213 08:56:29.646973   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.646980   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:29.646986   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:29.647044   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:29.671745   53550 cri.go:89] found id: ""
	I1213 08:56:29.671759   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.671766   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:29.671771   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:29.671827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:29.695958   53550 cri.go:89] found id: ""
	I1213 08:56:29.695972   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.695979   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:29.695984   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:29.696042   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:29.720480   53550 cri.go:89] found id: ""
	I1213 08:56:29.720494   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.720501   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:29.720506   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:29.720561   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:29.744988   53550 cri.go:89] found id: ""
	I1213 08:56:29.745001   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.745008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:29.745013   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:29.745069   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:29.768515   53550 cri.go:89] found id: ""
	I1213 08:56:29.768529   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.768536   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:29.768541   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:29.768600   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:29.792772   53550 cri.go:89] found id: ""
	I1213 08:56:29.792791   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.792798   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:29.792806   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:29.792815   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:29.848125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:29.848143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:29.859353   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:29.859369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:29.922416   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:29.914324   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.914996   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.915957   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.916631   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.918102   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:29.914324   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.914996   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.915957   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.916631   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.918102   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
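Every describe-nodes attempt fails identically: kubectl on the node dials https://localhost:8441 and gets connection refused on [::1]:8441, i.e. nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container. The kubectl PID climbs on each pass (12943, 13046, 13153, ...), so each retry is a fresh kubectl invocation rather than a hung one. A quick confirmation of the symptom from inside the node, assuming curl and ss exist in the node image (an assumption; they do not appear in this log):

    # 'connection refused' here just means nothing is bound to 8441 yet
    curl -ksS https://localhost:8441/healthz || true
    sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'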
	I1213 08:56:29.922426   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:29.922438   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:29.991606   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:29.991633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:32.539218   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:32.551358   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:32.551433   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:32.579755   53550 cri.go:89] found id: ""
	I1213 08:56:32.579769   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.579776   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:32.579782   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:32.579840   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:32.606298   53550 cri.go:89] found id: ""
	I1213 08:56:32.606312   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.606319   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:32.606325   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:32.606386   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:32.631992   53550 cri.go:89] found id: ""
	I1213 08:56:32.632006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.632023   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:32.632028   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:32.632086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:32.663992   53550 cri.go:89] found id: ""
	I1213 08:56:32.664006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.664013   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:32.664019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:32.664079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:32.688738   53550 cri.go:89] found id: ""
	I1213 08:56:32.688752   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.688759   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:32.688764   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:32.688824   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:32.714559   53550 cri.go:89] found id: ""
	I1213 08:56:32.714573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.714590   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:32.714596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:32.714663   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:32.741559   53550 cri.go:89] found id: ""
	I1213 08:56:32.741573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.741579   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:32.741587   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:32.741597   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:32.800820   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:32.800838   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:32.811825   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:32.811840   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:32.885502   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:32.876782   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.877261   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.879781   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.880103   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.881563   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:32.876782   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.877261   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.879781   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.880103   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.881563   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:32.885513   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:32.885525   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:32.948272   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:32.948291   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:35.480322   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:35.490281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:35.490342   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:35.514866   53550 cri.go:89] found id: ""
	I1213 08:56:35.514880   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.514891   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:35.514896   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:35.514956   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:35.547423   53550 cri.go:89] found id: ""
	I1213 08:56:35.547436   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.547443   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:35.547449   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:35.547529   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:35.576485   53550 cri.go:89] found id: ""
	I1213 08:56:35.576499   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.576506   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:35.576511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:35.576569   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:35.602583   53550 cri.go:89] found id: ""
	I1213 08:56:35.602597   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.602604   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:35.602610   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:35.602671   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:35.628894   53550 cri.go:89] found id: ""
	I1213 08:56:35.628908   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.628915   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:35.628920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:35.628983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:35.657754   53550 cri.go:89] found id: ""
	I1213 08:56:35.657768   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.657775   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:35.657780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:35.657838   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:35.682178   53550 cri.go:89] found id: ""
	I1213 08:56:35.682192   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.682198   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:35.682207   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:35.682218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:35.692814   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:35.692830   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:35.755108   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:35.747202   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.747982   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749544   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749858   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.751344   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:35.747202   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.747982   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749544   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749858   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.751344   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:35.755119   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:35.755130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:35.819728   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:35.819749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:35.848015   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:35.848031   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:38.404654   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:38.414683   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:38.414742   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:38.441126   53550 cri.go:89] found id: ""
	I1213 08:56:38.441140   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.441147   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:38.441152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:38.441214   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:38.465511   53550 cri.go:89] found id: ""
	I1213 08:56:38.465524   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.465545   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:38.465550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:38.465606   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:38.489339   53550 cri.go:89] found id: ""
	I1213 08:56:38.489353   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.489359   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:38.489364   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:38.489418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:38.513685   53550 cri.go:89] found id: ""
	I1213 08:56:38.513699   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.513706   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:38.513711   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:38.513768   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:38.542115   53550 cri.go:89] found id: ""
	I1213 08:56:38.542128   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.542135   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:38.542140   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:38.542204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:38.569759   53550 cri.go:89] found id: ""
	I1213 08:56:38.569772   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.569778   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:38.569784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:38.569842   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:38.596740   53550 cri.go:89] found id: ""
	I1213 08:56:38.596754   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.596761   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:38.596769   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:38.596780   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:38.654316   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:38.654335   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:38.665035   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:38.665050   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:38.729308   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:38.729317   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:38.729330   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:38.790889   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:38.790908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:41.323859   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:41.335168   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:41.335228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:41.361007   53550 cri.go:89] found id: ""
	I1213 08:56:41.361021   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.361028   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:41.361033   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:41.361090   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:41.385773   53550 cri.go:89] found id: ""
	I1213 08:56:41.385787   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.385794   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:41.385799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:41.385857   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:41.415146   53550 cri.go:89] found id: ""
	I1213 08:56:41.415160   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.415174   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:41.415179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:41.415235   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:41.441108   53550 cri.go:89] found id: ""
	I1213 08:56:41.441122   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.441129   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:41.441134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:41.441190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:41.475987   53550 cri.go:89] found id: ""
	I1213 08:56:41.476001   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.476008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:41.476014   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:41.476073   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:41.499775   53550 cri.go:89] found id: ""
	I1213 08:56:41.499789   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.499796   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:41.499801   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:41.499861   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:41.528901   53550 cri.go:89] found id: ""
	I1213 08:56:41.528914   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.528931   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:41.528939   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:41.528956   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:41.589661   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:41.589678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:41.602123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:41.602138   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:41.667706   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:41.659176   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.659929   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.661608   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.662073   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.663743   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:41.659176   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.659929   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.661608   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.662073   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.663743   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:41.667715   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:41.667735   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:41.730253   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:41.730270   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:44.257671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:44.269222   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:44.269293   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:44.294398   53550 cri.go:89] found id: ""
	I1213 08:56:44.294412   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.294419   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:44.294423   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:44.294484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:44.319070   53550 cri.go:89] found id: ""
	I1213 08:56:44.319084   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.319092   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:44.319097   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:44.319155   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:44.343392   53550 cri.go:89] found id: ""
	I1213 08:56:44.343405   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.343420   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:44.343425   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:44.343485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:44.367894   53550 cri.go:89] found id: ""
	I1213 08:56:44.367909   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.367924   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:44.367929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:44.367993   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:44.393473   53550 cri.go:89] found id: ""
	I1213 08:56:44.393487   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.393505   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:44.393511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:44.393579   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:44.419150   53550 cri.go:89] found id: ""
	I1213 08:56:44.419164   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.419171   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:44.419177   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:44.419236   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:44.445826   53550 cri.go:89] found id: ""
	I1213 08:56:44.445839   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.445846   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:44.445854   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:44.445864   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:44.473670   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:44.473686   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:44.532419   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:44.532439   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:44.545059   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:44.545075   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:44.621942   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:44.613047   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.613768   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.615594   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.616292   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.618008   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:44.613047   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.613768   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.615594   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.616292   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.618008   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:44.621960   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:44.621970   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
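The order of the Gathering logs for ... steps shuffles between passes (this pass collected container status first and containerd last, while the 08:56:38 pass went kubelet, dmesg, describe nodes, containerd, container status); the gatherers are presumably keyed in a Go map, whose iteration order is randomized, so the reordering is cosmetic rather than a change in behavior.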
	I1213 08:56:47.187660   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:47.197939   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:47.197999   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:47.223309   53550 cri.go:89] found id: ""
	I1213 08:56:47.223328   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.223335   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:47.223341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:47.223404   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:47.248945   53550 cri.go:89] found id: ""
	I1213 08:56:47.248958   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.248965   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:47.248971   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:47.249030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:47.277058   53550 cri.go:89] found id: ""
	I1213 08:56:47.277072   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.277079   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:47.277084   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:47.277141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:47.301116   53550 cri.go:89] found id: ""
	I1213 08:56:47.301130   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.301137   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:47.301151   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:47.301209   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:47.323965   53550 cri.go:89] found id: ""
	I1213 08:56:47.323979   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.323987   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:47.323992   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:47.324050   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:47.348999   53550 cri.go:89] found id: ""
	I1213 08:56:47.349019   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.349027   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:47.349032   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:47.349092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:47.373783   53550 cri.go:89] found id: ""
	I1213 08:56:47.373797   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.373803   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:47.373811   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:47.373820   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:47.429021   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:47.429039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:47.439785   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:47.439801   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:47.500829   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:47.500840   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:47.500850   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.568111   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:47.568130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.110119   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:50.120537   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:50.120602   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:50.148966   53550 cri.go:89] found id: ""
	I1213 08:56:50.148980   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.148986   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:50.148991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:50.149046   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:50.177907   53550 cri.go:89] found id: ""
	I1213 08:56:50.177921   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.177928   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:50.177933   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:50.177996   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:50.203131   53550 cri.go:89] found id: ""
	I1213 08:56:50.203144   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.203151   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:50.203155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:50.203262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:50.226237   53550 cri.go:89] found id: ""
	I1213 08:56:50.226257   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.226264   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:50.226269   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:50.226327   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:50.253758   53550 cri.go:89] found id: ""
	I1213 08:56:50.253773   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.253779   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:50.253784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:50.253843   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:50.278302   53550 cri.go:89] found id: ""
	I1213 08:56:50.278315   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.278322   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:50.278327   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:50.278392   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:50.309556   53550 cri.go:89] found id: ""
	I1213 08:56:50.309569   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.309576   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:50.309584   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:50.309594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:50.320066   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:50.320081   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:50.382949   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:50.382958   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:50.382969   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:50.444351   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:50.444370   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.470781   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:50.470797   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
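The pgrep probes land roughly three seconds apart (08:56:47.187, 08:56:50.110, 08:56:53.028), so the loop consumes its remaining wait budget in ~3 s steps without producing new information. The kubelet journal gathered on each pass is the most likely place for the underlying cause to surface; a hedged way to pull just the interesting lines from it, assuming grep is available on the node:

    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -iE 'apiserver|static pod|fail|error' | tail -n 40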
	I1213 08:56:53.028628   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:53.039130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:53.039200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:53.063996   53550 cri.go:89] found id: ""
	I1213 08:56:53.064009   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.064015   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:53.064020   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:53.064076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:53.088275   53550 cri.go:89] found id: ""
	I1213 08:56:53.088289   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.088296   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:53.088300   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:53.088358   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:53.111773   53550 cri.go:89] found id: ""
	I1213 08:56:53.111786   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.111793   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:53.111808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:53.111887   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:53.137026   53550 cri.go:89] found id: ""
	I1213 08:56:53.137040   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.137046   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:53.137051   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:53.137107   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:53.160335   53550 cri.go:89] found id: ""
	I1213 08:56:53.160349   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.160356   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:53.160361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:53.160416   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:53.184713   53550 cri.go:89] found id: ""
	I1213 08:56:53.184726   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.184733   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:53.184738   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:53.184795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:53.208847   53550 cri.go:89] found id: ""
	I1213 08:56:53.208861   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.208868   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:53.208875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:53.208886   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.266985   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:53.267004   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:53.277388   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:53.277404   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:53.340191   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:53.340200   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:53.340211   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:53.401706   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:53.401724   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:55.928555   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:55.939550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:55.939616   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:55.965405   53550 cri.go:89] found id: ""
	I1213 08:56:55.965419   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.965426   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:55.965431   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:55.965498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:55.992150   53550 cri.go:89] found id: ""
	I1213 08:56:55.992164   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.992171   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:55.992175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:55.992230   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:56.016602   53550 cri.go:89] found id: ""
	I1213 08:56:56.016616   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.016623   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:56.016628   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:56.016689   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:56.042580   53550 cri.go:89] found id: ""
	I1213 08:56:56.042593   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.042600   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:56.042605   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:56.042662   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:56.068761   53550 cri.go:89] found id: ""
	I1213 08:56:56.068775   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.068782   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:56.068787   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:56.068848   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:56.093033   53550 cri.go:89] found id: ""
	I1213 08:56:56.093048   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.093055   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:56.093061   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:56.093126   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:56.117228   53550 cri.go:89] found id: ""
	I1213 08:56:56.117241   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.117248   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:56.117255   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:56.117266   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:56.176992   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:56.177011   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:56.188270   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:56.188285   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:56.253019   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:56.253029   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:56.253039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:56.317674   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:56.317696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:58.848619   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:58.859053   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:58.859112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:58.885409   53550 cri.go:89] found id: ""
	I1213 08:56:58.885423   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.885430   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:58.885436   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:58.885494   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:58.910222   53550 cri.go:89] found id: ""
	I1213 08:56:58.910236   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.910243   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:58.910249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:58.910325   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:58.934888   53550 cri.go:89] found id: ""
	I1213 08:56:58.934902   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.934909   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:58.934914   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:58.934973   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:58.959400   53550 cri.go:89] found id: ""
	I1213 08:56:58.959413   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.959420   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:58.959426   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:58.959487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:58.983607   53550 cri.go:89] found id: ""
	I1213 08:56:58.983621   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.983627   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:58.983651   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:58.983710   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:59.013864   53550 cri.go:89] found id: ""
	I1213 08:56:59.013879   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.013886   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:59.013892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:59.013953   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:59.039411   53550 cri.go:89] found id: ""
	I1213 08:56:59.039425   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.039432   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:59.039475   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:59.039485   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:59.096733   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:59.096753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:59.107622   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:59.107636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:59.174925   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:59.174934   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:59.174947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:59.241043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:59.241063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:01.772758   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:01.783635   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:01.783701   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:01.809991   53550 cri.go:89] found id: ""
	I1213 08:57:01.810006   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.810012   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:01.810017   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:01.810077   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:01.839186   53550 cri.go:89] found id: ""
	I1213 08:57:01.839200   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.839207   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:01.839212   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:01.839280   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:01.863706   53550 cri.go:89] found id: ""
	I1213 08:57:01.863720   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.863727   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:01.863733   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:01.863802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:01.888840   53550 cri.go:89] found id: ""
	I1213 08:57:01.888853   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.888866   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:01.888871   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:01.888931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:01.915920   53550 cri.go:89] found id: ""
	I1213 08:57:01.915933   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.915940   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:01.915944   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:01.916002   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:01.945751   53550 cri.go:89] found id: ""
	I1213 08:57:01.945765   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.945771   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:01.945776   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:01.945845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:01.970743   53550 cri.go:89] found id: ""
	I1213 08:57:01.970757   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.970765   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:01.970773   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:01.970782   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:02.026866   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:02.026889   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:02.038522   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:02.038539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:02.102348   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:02.102361   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:02.102375   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:02.169043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:02.169063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:04.696543   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:04.706341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:04.706437   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:04.731229   53550 cri.go:89] found id: ""
	I1213 08:57:04.731243   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.731250   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:04.731255   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:04.731313   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:04.755649   53550 cri.go:89] found id: ""
	I1213 08:57:04.755664   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.755671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:04.755675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:04.755731   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:04.792911   53550 cri.go:89] found id: ""
	I1213 08:57:04.792925   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.792932   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:04.792937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:04.793004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:04.819883   53550 cri.go:89] found id: ""
	I1213 08:57:04.819898   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.819905   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:04.819910   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:04.819977   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:04.849837   53550 cri.go:89] found id: ""
	I1213 08:57:04.849851   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.849858   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:04.849863   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:04.849918   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:04.874858   53550 cri.go:89] found id: ""
	I1213 08:57:04.874882   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.874890   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:04.874895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:04.874960   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:04.903606   53550 cri.go:89] found id: ""
	I1213 08:57:04.903627   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.903634   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:04.903643   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:04.903654   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:04.974645   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:04.974655   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:04.974665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:05.042463   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:05.042483   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:05.073448   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:05.073463   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:05.138728   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:05.138751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.650339   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:07.660396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:07.660456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:07.683873   53550 cri.go:89] found id: ""
	I1213 08:57:07.683886   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.683893   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:07.683898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:07.683955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:07.708331   53550 cri.go:89] found id: ""
	I1213 08:57:07.708345   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.708352   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:07.708357   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:07.708413   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:07.732899   53550 cri.go:89] found id: ""
	I1213 08:57:07.732913   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.732920   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:07.732925   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:07.732984   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:07.757287   53550 cri.go:89] found id: ""
	I1213 08:57:07.757301   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.757308   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:07.757313   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:07.757384   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:07.795374   53550 cri.go:89] found id: ""
	I1213 08:57:07.795387   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.795394   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:07.795399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:07.795464   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:07.825153   53550 cri.go:89] found id: ""
	I1213 08:57:07.825167   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.825173   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:07.825182   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:07.825237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:07.852307   53550 cri.go:89] found id: ""
	I1213 08:57:07.852321   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.852327   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:07.852336   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:07.852345   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:07.880059   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:07.880077   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:07.939241   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:07.939258   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.949880   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:07.949895   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:08.020565   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:08.020576   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:08.020587   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.587648   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:10.597489   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:10.597549   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:10.628550   53550 cri.go:89] found id: ""
	I1213 08:57:10.628564   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.628571   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:10.628579   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:10.628636   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:10.652715   53550 cri.go:89] found id: ""
	I1213 08:57:10.652728   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.652735   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:10.652740   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:10.652800   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:10.676571   53550 cri.go:89] found id: ""
	I1213 08:57:10.676585   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.676591   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:10.676596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:10.676656   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:10.701425   53550 cri.go:89] found id: ""
	I1213 08:57:10.701439   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.701446   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:10.701451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:10.701512   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:10.725031   53550 cri.go:89] found id: ""
	I1213 08:57:10.725044   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.725051   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:10.725056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:10.725115   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:10.748783   53550 cri.go:89] found id: ""
	I1213 08:57:10.748796   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.748803   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:10.748808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:10.748865   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:10.782351   53550 cri.go:89] found id: ""
	I1213 08:57:10.782364   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.782371   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:10.782379   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:10.782389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:10.795735   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:10.795751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:10.871365   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:10.871375   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:10.871386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.934169   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:10.934186   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:10.960579   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:10.960595   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.522265   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:13.532592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:13.532651   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:13.557594   53550 cri.go:89] found id: ""
	I1213 08:57:13.557607   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.557614   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:13.557622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:13.557678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:13.582015   53550 cri.go:89] found id: ""
	I1213 08:57:13.582029   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.582036   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:13.582041   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:13.582101   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:13.606414   53550 cri.go:89] found id: ""
	I1213 08:57:13.606430   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.606437   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:13.606442   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:13.606501   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:13.633258   53550 cri.go:89] found id: ""
	I1213 08:57:13.633271   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.633278   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:13.633283   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:13.633347   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:13.657138   53550 cri.go:89] found id: ""
	I1213 08:57:13.657151   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.657158   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:13.657163   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:13.657220   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:13.680740   53550 cri.go:89] found id: ""
	I1213 08:57:13.680754   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.680760   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:13.680766   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:13.680821   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:13.704953   53550 cri.go:89] found id: ""
	I1213 08:57:13.704966   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.704973   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:13.704981   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:13.704992   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:13.770673   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:13.770683   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:13.770696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:13.840896   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:13.840915   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:13.870203   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:13.870219   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.927703   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:13.927721   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.440308   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:16.450569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:16.450632   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:16.477483   53550 cri.go:89] found id: ""
	I1213 08:57:16.477497   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.477503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:16.477508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:16.477565   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:16.502333   53550 cri.go:89] found id: ""
	I1213 08:57:16.502347   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.502354   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:16.502369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:16.502428   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:16.532266   53550 cri.go:89] found id: ""
	I1213 08:57:16.532282   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.532288   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:16.532293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:16.532350   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:16.560396   53550 cri.go:89] found id: ""
	I1213 08:57:16.560410   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.560417   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:16.560422   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:16.560478   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:16.588855   53550 cri.go:89] found id: ""
	I1213 08:57:16.588868   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.588875   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:16.588881   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:16.588940   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:16.613011   53550 cri.go:89] found id: ""
	I1213 08:57:16.613024   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.613031   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:16.613036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:16.613093   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:16.637627   53550 cri.go:89] found id: ""
	I1213 08:57:16.637641   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.637648   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:16.637655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:16.637665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:16.694489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:16.694506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.705456   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:16.705471   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:16.774554   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:16.774565   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:16.774577   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:16.840799   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:16.840818   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
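	[editor's note] The 08:57:13 and 08:57:16 cycles above set the pattern for the rest of this wait: minikube first probes for a live apiserver process with pgrep, then lists CRI containers for each control-plane component with crictl, and, finding none, re-gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal shell sketch of that probe sequence follows; it is a reconstruction for illustration only (the loop structure and the roughly 3 s retry interval are inferred from the timestamps in this log, not taken from minikube source):
	
	    # Illustrative reconstruction of the polling seen in these logs (not minikube code).
	    # Assumes crictl is available on the node, as the logged commands do.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        [ -z "$ids" ] && echo "No container was found matching \"$name\""
	      done
	      sleep 3   # the log shows roughly a 3 s gap between polls
	    done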
	I1213 08:57:19.370819   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:19.380996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:19.381057   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:19.405680   53550 cri.go:89] found id: ""
	I1213 08:57:19.405694   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.405701   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:19.405707   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:19.405765   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:19.434562   53550 cri.go:89] found id: ""
	I1213 08:57:19.434575   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.434583   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:19.434588   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:19.434645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:19.460752   53550 cri.go:89] found id: ""
	I1213 08:57:19.460765   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.460772   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:19.460777   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:19.460833   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:19.486494   53550 cri.go:89] found id: ""
	I1213 08:57:19.486508   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.486515   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:19.486520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:19.486580   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:19.515809   53550 cri.go:89] found id: ""
	I1213 08:57:19.515824   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.515830   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:19.515835   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:19.515892   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:19.541206   53550 cri.go:89] found id: ""
	I1213 08:57:19.541219   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.541226   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:19.541231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:19.541298   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:19.565992   53550 cri.go:89] found id: ""
	I1213 08:57:19.566005   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.566012   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:19.566020   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:19.566030   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:19.593821   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:19.593836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:19.650142   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:19.650161   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:19.660963   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:19.660978   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:19.726595   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:19.726604   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:19.726615   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
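	[editor's note] Every failed "describe nodes" entry in this stretch has the same root cause: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441 (this profile's apiserver port), and with no kube-apiserver container running nothing is listening there, so each request fails with "connection refused". A hedged spot check one could run on the node to confirm this reading (standard tools, not part of the test itself):
	
	    # Hypothetical verification of the failure mode shown above.
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    curl -sk https://localhost:8441/healthz \
	      || echo "connection refused, matching the kubectl errors"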
	I1213 08:57:22.290630   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:22.300536   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:22.300595   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:22.323650   53550 cri.go:89] found id: ""
	I1213 08:57:22.323663   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.323670   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:22.323675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:22.323738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:22.346879   53550 cri.go:89] found id: ""
	I1213 08:57:22.346892   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.346899   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:22.346904   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:22.346958   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:22.370613   53550 cri.go:89] found id: ""
	I1213 08:57:22.370627   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.370633   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:22.370638   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:22.370695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:22.397037   53550 cri.go:89] found id: ""
	I1213 08:57:22.397051   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.397057   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:22.397062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:22.397120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:22.420786   53550 cri.go:89] found id: ""
	I1213 08:57:22.420799   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.420806   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:22.420811   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:22.420873   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:22.445029   53550 cri.go:89] found id: ""
	I1213 08:57:22.445043   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.445050   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:22.445056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:22.445112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:22.468674   53550 cri.go:89] found id: ""
	I1213 08:57:22.468688   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.468694   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:22.468702   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:22.468712   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:22.495304   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:22.495322   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:22.552462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:22.552479   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:22.562826   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:22.562841   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:22.622604   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:22.622614   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:22.622625   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:25.187376   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:25.197281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:25.197340   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:25.224829   53550 cri.go:89] found id: ""
	I1213 08:57:25.224843   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.224850   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:25.224855   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:25.224914   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:25.253288   53550 cri.go:89] found id: ""
	I1213 08:57:25.253303   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.253310   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:25.253315   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:25.253371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:25.277253   53550 cri.go:89] found id: ""
	I1213 08:57:25.277267   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.277274   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:25.277279   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:25.277338   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:25.303815   53550 cri.go:89] found id: ""
	I1213 08:57:25.303828   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.303835   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:25.303840   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:25.303901   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:25.328041   53550 cri.go:89] found id: ""
	I1213 08:57:25.328054   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.328060   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:25.328065   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:25.328123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:25.356334   53550 cri.go:89] found id: ""
	I1213 08:57:25.356348   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.356355   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:25.356369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:25.356424   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:25.380096   53550 cri.go:89] found id: ""
	I1213 08:57:25.380110   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.380116   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:25.380124   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:25.380134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:25.439426   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:25.439444   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:25.449905   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:25.449921   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:25.512900   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:25.512910   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:25.512920   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:25.575756   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:25.575775   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:28.103479   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:28.113820   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:28.113880   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:28.139011   53550 cri.go:89] found id: ""
	I1213 08:57:28.139026   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.139033   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:28.139038   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:28.139097   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:28.169622   53550 cri.go:89] found id: ""
	I1213 08:57:28.169635   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.169642   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:28.169647   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:28.169707   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:28.197421   53550 cri.go:89] found id: ""
	I1213 08:57:28.197436   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.197443   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:28.197448   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:28.197504   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:28.221931   53550 cri.go:89] found id: ""
	I1213 08:57:28.221945   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.221952   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:28.221957   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:28.222019   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:28.245719   53550 cri.go:89] found id: ""
	I1213 08:57:28.245732   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.245739   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:28.245744   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:28.245801   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:28.273087   53550 cri.go:89] found id: ""
	I1213 08:57:28.273101   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.273108   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:28.273113   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:28.273170   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:28.299359   53550 cri.go:89] found id: ""
	I1213 08:57:28.299372   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.299379   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:28.299388   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:28.299398   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:28.355178   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:28.355195   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:28.365905   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:28.365921   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:28.430892   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:28.422372   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.423034   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.424584   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.425047   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.426553   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:28.422372   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.423034   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.424584   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.425047   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.426553   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:28.430909   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:28.430919   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:28.493985   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:28.494008   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:31.028636   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:31.039540   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:31.039600   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:31.067565   53550 cri.go:89] found id: ""
	I1213 08:57:31.067579   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.067586   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:31.067591   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:31.067649   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:31.103967   53550 cri.go:89] found id: ""
	I1213 08:57:31.103994   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.104001   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:31.104006   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:31.104072   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:31.128428   53550 cri.go:89] found id: ""
	I1213 08:57:31.128455   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.128462   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:31.128467   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:31.128535   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:31.157837   53550 cri.go:89] found id: ""
	I1213 08:57:31.157851   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.157857   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:31.157864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:31.157920   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:31.182139   53550 cri.go:89] found id: ""
	I1213 08:57:31.182153   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.182160   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:31.182165   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:31.182221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:31.206203   53550 cri.go:89] found id: ""
	I1213 08:57:31.206217   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.206224   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:31.206229   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:31.206284   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:31.230290   53550 cri.go:89] found id: ""
	I1213 08:57:31.230304   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.230311   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:31.230319   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:31.230335   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:31.240760   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:31.240775   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:31.306114   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:31.297892   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.298559   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300121   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300425   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.301865   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:31.297892   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.298559   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300121   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300425   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.301865   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:31.306123   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:31.306134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:31.372771   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:31.372790   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:31.402327   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:31.402342   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:33.959197   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:33.969353   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:33.969420   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:33.994169   53550 cri.go:89] found id: ""
	I1213 08:57:33.994183   53550 logs.go:282] 0 containers: []
	W1213 08:57:33.994190   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:33.994195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:33.994253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:34.022338   53550 cri.go:89] found id: ""
	I1213 08:57:34.022367   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.022375   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:34.022380   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:34.022457   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:34.055485   53550 cri.go:89] found id: ""
	I1213 08:57:34.055547   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.055563   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:34.055569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:34.055645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:34.085397   53550 cri.go:89] found id: ""
	I1213 08:57:34.085411   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.085419   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:34.085424   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:34.085487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:34.112540   53550 cri.go:89] found id: ""
	I1213 08:57:34.112553   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.112561   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:34.112566   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:34.112622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:34.137911   53550 cri.go:89] found id: ""
	I1213 08:57:34.137934   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.137942   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:34.137947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:34.138013   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:34.165184   53550 cri.go:89] found id: ""
	I1213 08:57:34.165197   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.165204   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:34.165213   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:34.165224   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:34.221937   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:34.221954   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:34.232900   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:34.232925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:34.299398   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:34.291492   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.291917   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.293598   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.294066   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.295548   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:34.291492   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.291917   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.293598   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.294066   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.295548   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:34.299409   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:34.299422   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:34.362086   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:34.362104   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:36.894643   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:36.904509   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:36.904571   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:36.928971   53550 cri.go:89] found id: ""
	I1213 08:57:36.928986   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.928993   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:36.928998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:36.929055   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:36.963924   53550 cri.go:89] found id: ""
	I1213 08:57:36.963938   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.963945   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:36.963956   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:36.964015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:36.989352   53550 cri.go:89] found id: ""
	I1213 08:57:36.989366   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.989373   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:36.989378   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:36.989435   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:37.022947   53550 cri.go:89] found id: ""
	I1213 08:57:37.022973   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.022982   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:37.022987   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:37.023065   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:37.058627   53550 cri.go:89] found id: ""
	I1213 08:57:37.058642   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.058649   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:37.058654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:37.058711   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:37.093026   53550 cri.go:89] found id: ""
	I1213 08:57:37.093047   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.093054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:37.093059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:37.093127   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:37.119099   53550 cri.go:89] found id: ""
	I1213 08:57:37.119113   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.119120   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:37.119127   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:37.119142   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:37.129746   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:37.129770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:37.192251   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:37.184204   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.185001   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186648   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186953   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.188434   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:37.184204   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.185001   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186648   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186953   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.188434   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:37.192263   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:37.192274   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:37.258678   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:37.258697   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:37.286406   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:37.286421   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
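	[editor's note] One detail worth noting in the recurring "container status" gatherer above: the one-liner builds in a double fallback. The backtick substitution prefers crictl's full path when `which` finds it, substitutes the bare name otherwise, and if that invocation fails outright the command falls back to docker. The same command, unchanged, with comments added:
	
	    # use crictl (full path via `which` when available, bare name otherwise);
	    # if the crictl invocation fails entirely, fall back to docker:
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a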
	I1213 08:57:39.843274   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:39.853155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:39.853221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:39.880613   53550 cri.go:89] found id: ""
	I1213 08:57:39.880627   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.880634   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:39.880639   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:39.880695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:39.908166   53550 cri.go:89] found id: ""
	I1213 08:57:39.908179   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.908191   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:39.908197   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:39.908255   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:39.931780   53550 cri.go:89] found id: ""
	I1213 08:57:39.931803   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.931811   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:39.931816   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:39.931885   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:39.959597   53550 cri.go:89] found id: ""
	I1213 08:57:39.959610   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.959617   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:39.959622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:39.959678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:39.987876   53550 cri.go:89] found id: ""
	I1213 08:57:39.987889   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.987896   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:39.987901   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:39.987955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:40.032588   53550 cri.go:89] found id: ""
	I1213 08:57:40.032603   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.032610   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:40.032615   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:40.032675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:40.061908   53550 cri.go:89] found id: ""
	I1213 08:57:40.061922   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.061929   53550 logs.go:284] No container was found matching "kindnet"
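Each cycle walks the same fixed component list and asks the CRI for containers by name; /run/containerd/runc/k8s.io is containerd's runc state root for its k8s.io namespace, which is where CRI-managed containers live. Every probe in this section returns an empty id list (found id: ""), i.e. the control-plane containers were never created. A sketch of one scan pass done by hand, with the component list and crictl flags copied from the log lines above:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")  # same flags the loop uses
      echo "$name: ${ids:-<no container>}"             # empty matches 'found id: ""' above
    done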
	I1213 08:57:40.061937   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:40.061947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:40.126971   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:40.126990   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:40.143091   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:40.143107   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:40.207107   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:40.198855   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.199548   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201367   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201930   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.203470   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
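The describe-nodes failure is consistent with the empty container listings: the node-local kubectl dials the apiserver at localhost:8441 (the endpoint in /var/lib/minikube/kubeconfig) and the connection is refused because nothing is bound to the port. Two quick checks from inside the node would confirm that directly; a sketch, assuming ss and curl are available in the node image:

    sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
    # Any HTTP(S) response at all would mean some process is bound there;
    # "connection refused", as in the log, means there is no listener.
    curl -ksS --max-time 5 https://localhost:8441/healthz || true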
	I1213 08:57:40.207117   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:40.207127   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:40.276818   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:40.276842   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
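The four "Gathering logs" steps in each cycle are minikube's fixed diagnostic bundle while it waits: the last 400 lines of the kubelet and containerd journald units, kernel messages of level warn and above (dmesg -P disables the pager, -H selects human-readable output, -L=never strips color), and a container listing that prefers crictl but falls back to docker ps when crictl is not on PATH (that is what the 'which crictl || echo crictl' command substitution arranges). The same bundle collected by hand, commands copied verbatim from the lines above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a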
	I1213 08:57:42.806068   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:42.816147   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:42.816212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:42.844268   53550 cri.go:89] found id: ""
	I1213 08:57:42.844281   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.844288   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:42.844294   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:42.844353   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:42.869114   53550 cri.go:89] found id: ""
	I1213 08:57:42.869127   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.869134   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:42.869139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:42.869195   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:42.892972   53550 cri.go:89] found id: ""
	I1213 08:57:42.892986   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.892993   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:42.892998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:42.893072   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:42.916620   53550 cri.go:89] found id: ""
	I1213 08:57:42.916633   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.916640   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:42.916646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:42.916702   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:42.940313   53550 cri.go:89] found id: ""
	I1213 08:57:42.940327   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.940334   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:42.940339   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:42.940394   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:42.965365   53550 cri.go:89] found id: ""
	I1213 08:57:42.965379   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.965386   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:42.965391   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:42.965451   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:42.990702   53550 cri.go:89] found id: ""
	I1213 08:57:42.990715   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.990722   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:42.990729   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:42.990742   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:43.048989   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:43.049008   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:43.061818   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:43.061836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:43.129375   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:43.121071   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.121717   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.123392   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.124082   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.125650   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
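Note that the describe-nodes probe does not use the host's kubectl: it runs the kubectl binary minikube ships inside the node, pinned to the cluster's Kubernetes version, against the node-local kubeconfig. Re-running it by hand reproduces the refusal; paths copied verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig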
	I1213 08:57:43.129386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:43.129396   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:43.191354   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:43.191373   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.723775   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:45.733853   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:45.733913   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:45.765626   53550 cri.go:89] found id: ""
	I1213 08:57:45.765639   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.765646   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:45.765652   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:45.765713   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:45.793721   53550 cri.go:89] found id: ""
	I1213 08:57:45.793734   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.793741   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:45.793746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:45.793802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:45.822307   53550 cri.go:89] found id: ""
	I1213 08:57:45.822320   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.822341   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:45.822347   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:45.822411   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:45.851368   53550 cri.go:89] found id: ""
	I1213 08:57:45.851382   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.851390   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:45.851395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:45.851454   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:45.877295   53550 cri.go:89] found id: ""
	I1213 08:57:45.877308   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.877321   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:45.877326   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:45.877382   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:45.905661   53550 cri.go:89] found id: ""
	I1213 08:57:45.905674   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.905681   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:45.905686   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:45.905745   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:45.934028   53550 cri.go:89] found id: ""
	I1213 08:57:45.934042   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.934050   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:45.934058   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:45.934068   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.962148   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:45.962164   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:46.017986   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:46.018005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:46.031923   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:46.031939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:46.106367   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:46.098570   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.099183   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.100789   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.101326   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.102477   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:46.106379   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:46.106389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:48.670805   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:48.680874   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:48.680935   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:48.704942   53550 cri.go:89] found id: ""
	I1213 08:57:48.704955   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.704962   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:48.704968   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:48.705029   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:48.729965   53550 cri.go:89] found id: ""
	I1213 08:57:48.729979   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.729986   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:48.729991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:48.730048   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:48.754712   53550 cri.go:89] found id: ""
	I1213 08:57:48.754726   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.754733   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:48.754739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:48.754798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:48.786991   53550 cri.go:89] found id: ""
	I1213 08:57:48.787014   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.787021   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:48.787026   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:48.787082   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:48.812918   53550 cri.go:89] found id: ""
	I1213 08:57:48.812932   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.812939   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:48.812943   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:48.813010   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:48.841512   53550 cri.go:89] found id: ""
	I1213 08:57:48.841525   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.841533   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:48.841538   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:48.841597   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:48.866500   53550 cri.go:89] found id: ""
	I1213 08:57:48.866514   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.866521   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:48.866529   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:48.866539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:48.922975   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:48.922993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:48.933525   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:48.933540   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:48.995831   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:48.995841   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:48.995852   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:49.061866   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:49.061885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.594845   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:51.606962   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:51.607021   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:51.630371   53550 cri.go:89] found id: ""
	I1213 08:57:51.630390   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.630397   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:51.630402   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:51.630456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:51.655753   53550 cri.go:89] found id: ""
	I1213 08:57:51.655768   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.655775   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:51.655780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:51.655835   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:51.680116   53550 cri.go:89] found id: ""
	I1213 08:57:51.680130   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.680136   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:51.680142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:51.680199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:51.703715   53550 cri.go:89] found id: ""
	I1213 08:57:51.703728   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.703734   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:51.703739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:51.703798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:51.728242   53550 cri.go:89] found id: ""
	I1213 08:57:51.728257   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.728263   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:51.728268   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:51.728334   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:51.752764   53550 cri.go:89] found id: ""
	I1213 08:57:51.752777   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.752783   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:51.752788   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:51.752845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:51.776542   53550 cri.go:89] found id: ""
	I1213 08:57:51.776556   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.776562   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:51.776570   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:51.776583   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.809113   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:51.809129   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:51.868930   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:51.868948   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:51.879570   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:51.879594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:51.948757   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:51.948767   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:51.948777   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.516634   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:54.526661   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:54.526738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:54.553106   53550 cri.go:89] found id: ""
	I1213 08:57:54.553120   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.553126   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:54.553132   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:54.553190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:54.581404   53550 cri.go:89] found id: ""
	I1213 08:57:54.581417   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.581426   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:54.581430   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:54.581484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:54.605783   53550 cri.go:89] found id: ""
	I1213 08:57:54.605796   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.605803   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:54.605807   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:54.605862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:54.634146   53550 cri.go:89] found id: ""
	I1213 08:57:54.634160   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.634167   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:54.634171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:54.634227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:54.658720   53550 cri.go:89] found id: ""
	I1213 08:57:54.658734   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.658741   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:54.658746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:54.658803   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:54.683926   53550 cri.go:89] found id: ""
	I1213 08:57:54.683940   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.683947   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:54.683952   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:54.684011   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:54.712272   53550 cri.go:89] found id: ""
	I1213 08:57:54.712286   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.712293   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:54.712300   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:54.712312   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:54.769590   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:54.769607   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:54.781369   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:54.781386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:54.846793   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:54.846803   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:54.846813   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.913758   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:54.913778   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.444332   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:57.453993   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:57.454058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:57.478195   53550 cri.go:89] found id: ""
	I1213 08:57:57.478209   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.478225   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:57.478231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:57.478301   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:57.502242   53550 cri.go:89] found id: ""
	I1213 08:57:57.502269   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.502277   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:57.502282   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:57.502346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:57.525845   53550 cri.go:89] found id: ""
	I1213 08:57:57.525859   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.525867   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:57.525872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:57.525931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:57.549123   53550 cri.go:89] found id: ""
	I1213 08:57:57.549137   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.549143   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:57.549148   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:57.549203   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:57.576988   53550 cri.go:89] found id: ""
	I1213 08:57:57.577002   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.577009   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:57.577019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:57.577076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:57.599837   53550 cri.go:89] found id: ""
	I1213 08:57:57.599851   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.599858   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:57.599864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:57.599932   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:57.623671   53550 cri.go:89] found id: ""
	I1213 08:57:57.623685   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.623693   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:57.623700   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:57.623711   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:57.634031   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:57.634046   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:57.695658   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:57.695668   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:57.695678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:57.762393   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:57.762412   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.790711   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:57.790726   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.355817   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:00.372076   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:00.372142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:00.409377   53550 cri.go:89] found id: ""
	I1213 08:58:00.409392   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.409398   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:00.409404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:00.409467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:00.436239   53550 cri.go:89] found id: ""
	I1213 08:58:00.436254   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.436261   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:00.436266   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:00.436326   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:00.461909   53550 cri.go:89] found id: ""
	I1213 08:58:00.461922   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.461929   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:00.461934   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:00.461991   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:00.491257   53550 cri.go:89] found id: ""
	I1213 08:58:00.491270   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.491276   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:00.491281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:00.491339   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:00.517632   53550 cri.go:89] found id: ""
	I1213 08:58:00.517646   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.517658   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:00.517664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:00.517726   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:00.543370   53550 cri.go:89] found id: ""
	I1213 08:58:00.543384   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.543391   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:00.543396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:00.543460   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:00.568967   53550 cri.go:89] found id: ""
	I1213 08:58:00.568980   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.568987   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:00.568995   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:00.569005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:00.636984   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:00.636994   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:00.637006   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:00.699893   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:00.699911   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:00.730182   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:00.730198   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.787828   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:00.787847   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:03.298762   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:03.310337   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:03.310399   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:03.345482   53550 cri.go:89] found id: ""
	I1213 08:58:03.345496   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.345503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:03.345508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:03.345568   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:03.370651   53550 cri.go:89] found id: ""
	I1213 08:58:03.370664   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.370671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:03.370676   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:03.370730   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:03.393554   53550 cri.go:89] found id: ""
	I1213 08:58:03.393568   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.393574   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:03.393580   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:03.393638   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:03.418084   53550 cri.go:89] found id: ""
	I1213 08:58:03.418098   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.418105   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:03.418110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:03.418180   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:03.442426   53550 cri.go:89] found id: ""
	I1213 08:58:03.442440   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.442447   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:03.442451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:03.442510   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:03.467378   53550 cri.go:89] found id: ""
	I1213 08:58:03.467391   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.467398   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:03.467404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:03.467539   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:03.493640   53550 cri.go:89] found id: ""
	I1213 08:58:03.493653   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.493660   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:03.493668   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:03.493678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:03.559295   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:03.559305   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:03.559315   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:03.622616   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:03.622633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:03.656517   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:03.656534   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:03.715111   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:03.715131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
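Every iteration of the three-second loop above is the same probe: minikube looks for a running kube-apiserver process, asks the CRI for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and then re-gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The "describe nodes" step fails with "connection refused" on localhost:8441 for the same reason the container lists are empty: nothing is listening on the custom apiserver port. A minimal sketch of the same probe, runnable inside the node (e.g. via "minikube ssh" on the profile under test); the component list and port are taken from the log above, everything else is illustrative:

    # Query the CRI for each control-plane container, as the loop above does.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done
    # A healthy apiserver would answer on the custom port 8441:
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening"

The container-status command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, is just a fallback chain: prefer an installed crictl, otherwise try the bare name, otherwise fall back to docker.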
	I1213 08:58:06.226614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:06.237139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:06.237200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:06.261636   53550 cri.go:89] found id: ""
	I1213 08:58:06.261652   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.261659   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:06.261664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:06.261727   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:06.293692   53550 cri.go:89] found id: ""
	I1213 08:58:06.293707   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.293714   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:06.293719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:06.293778   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:06.321565   53550 cri.go:89] found id: ""
	I1213 08:58:06.321578   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.321584   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:06.321589   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:06.321643   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:06.348809   53550 cri.go:89] found id: ""
	I1213 08:58:06.348856   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.348862   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:06.348869   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:06.348925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:06.378146   53550 cri.go:89] found id: ""
	I1213 08:58:06.378159   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.378166   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:06.378171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:06.378227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:06.402993   53550 cri.go:89] found id: ""
	I1213 08:58:06.403006   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.403013   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:06.403019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:06.403074   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:06.429062   53550 cri.go:89] found id: ""
	I1213 08:58:06.429076   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.429084   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:06.429092   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:06.429102   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:06.485200   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:06.485218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.496017   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:06.496033   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:06.561266   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:06.561275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:06.561299   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:06.624429   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:06.624451   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.152326   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:09.162496   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:09.162552   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:09.187570   53550 cri.go:89] found id: ""
	I1213 08:58:09.187583   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.187590   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:09.187595   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:09.187653   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:09.211361   53550 cri.go:89] found id: ""
	I1213 08:58:09.211375   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.211382   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:09.211387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:09.211441   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:09.240289   53550 cri.go:89] found id: ""
	I1213 08:58:09.240302   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.240310   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:09.240316   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:09.240381   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:09.263680   53550 cri.go:89] found id: ""
	I1213 08:58:09.263694   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.263701   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:09.263706   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:09.263767   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:09.289437   53550 cri.go:89] found id: ""
	I1213 08:58:09.289451   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.289458   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:09.289463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:09.289524   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:09.323385   53550 cri.go:89] found id: ""
	I1213 08:58:09.323398   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.323405   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:09.323410   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:09.323467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:09.353577   53550 cri.go:89] found id: ""
	I1213 08:58:09.353590   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.353597   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:09.353605   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:09.353616   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.382787   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:09.382803   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:09.449042   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:09.449060   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:09.460226   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:09.460242   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:09.528091   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:09.528102   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:09.528112   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:12.097937   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:12.108009   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:12.108068   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:12.131531   53550 cri.go:89] found id: ""
	I1213 08:58:12.131546   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.131553   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:12.131558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:12.131621   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:12.161149   53550 cri.go:89] found id: ""
	I1213 08:58:12.161163   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.161170   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:12.161175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:12.161237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:12.187318   53550 cri.go:89] found id: ""
	I1213 08:58:12.187332   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.187339   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:12.187344   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:12.187400   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:12.212736   53550 cri.go:89] found id: ""
	I1213 08:58:12.212749   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.212756   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:12.212761   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:12.212818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:12.236946   53550 cri.go:89] found id: ""
	I1213 08:58:12.236959   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.236967   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:12.236973   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:12.237036   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:12.260663   53550 cri.go:89] found id: ""
	I1213 08:58:12.260677   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.260683   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:12.260690   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:12.260746   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:12.292004   53550 cri.go:89] found id: ""
	I1213 08:58:12.292022   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.292030   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:12.292038   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:12.292055   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:12.338118   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:12.338134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:12.397489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:12.397527   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:12.408810   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:12.408834   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:12.471195   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:12.471207   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:12.471217   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:15.035075   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:15.046491   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:15.046557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:15.073355   53550 cri.go:89] found id: ""
	I1213 08:58:15.073368   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.073375   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:15.073381   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:15.073444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:15.098531   53550 cri.go:89] found id: ""
	I1213 08:58:15.098545   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.098553   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:15.098558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:15.098620   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:15.125009   53550 cri.go:89] found id: ""
	I1213 08:58:15.125024   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.125031   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:15.125036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:15.125096   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:15.150565   53550 cri.go:89] found id: ""
	I1213 08:58:15.150579   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.150586   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:15.150591   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:15.150650   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:15.176538   53550 cri.go:89] found id: ""
	I1213 08:58:15.176552   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.176559   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:15.176564   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:15.176622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:15.200435   53550 cri.go:89] found id: ""
	I1213 08:58:15.200449   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.200472   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:15.200477   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:15.200554   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:15.224596   53550 cri.go:89] found id: ""
	I1213 08:58:15.224610   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.224617   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:15.224625   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:15.224636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:15.299267   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:15.288531   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.289386   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292038   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292350   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.293775   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:15.299277   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:15.299287   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:15.370114   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:15.370160   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:15.400555   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:15.400569   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:15.458044   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:15.458062   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:17.970291   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:17.980756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:17.980816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:18.009453   53550 cri.go:89] found id: ""
	I1213 08:58:18.009470   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.009478   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:18.009483   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:18.009912   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:18.040545   53550 cri.go:89] found id: ""
	I1213 08:58:18.040560   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.040567   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:18.040572   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:18.040634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:18.065694   53550 cri.go:89] found id: ""
	I1213 08:58:18.065711   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.065721   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:18.065727   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:18.065795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:18.091133   53550 cri.go:89] found id: ""
	I1213 08:58:18.091147   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.091155   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:18.091169   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:18.091228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:18.118236   53550 cri.go:89] found id: ""
	I1213 08:58:18.118250   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.118257   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:18.118262   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:18.118321   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:18.141948   53550 cri.go:89] found id: ""
	I1213 08:58:18.141961   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.141968   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:18.141974   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:18.142030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:18.167116   53550 cri.go:89] found id: ""
	I1213 08:58:18.167130   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.167137   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:18.167145   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:18.167158   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:18.242811   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:18.234010   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.235596   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.236202   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237378   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237833   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:18.242822   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:18.242833   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:18.314955   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:18.314974   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:18.343207   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:18.343222   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:18.398868   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:18.398887   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:20.911155   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:20.921270   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:20.921329   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:20.949336   53550 cri.go:89] found id: ""
	I1213 08:58:20.949350   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.949356   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:20.949361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:20.949418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:20.973382   53550 cri.go:89] found id: ""
	I1213 08:58:20.973395   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.973402   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:20.973408   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:20.973470   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:21.009413   53550 cri.go:89] found id: ""
	I1213 08:58:21.009431   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.009439   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:21.009444   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:21.009508   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:21.038840   53550 cri.go:89] found id: ""
	I1213 08:58:21.038898   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.038906   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:21.038913   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:21.038981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:21.062283   53550 cri.go:89] found id: ""
	I1213 08:58:21.062296   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.062303   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:21.062308   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:21.062430   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:21.086629   53550 cri.go:89] found id: ""
	I1213 08:58:21.086643   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.086650   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:21.086655   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:21.086725   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:21.113708   53550 cri.go:89] found id: ""
	I1213 08:58:21.113722   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.113729   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:21.113737   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:21.113749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:21.169462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:21.169481   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:21.180306   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:21.180328   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:21.242376   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:21.233989   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.234781   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236321   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236625   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.238091   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:21.242386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:21.242400   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:21.306044   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:21.306063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:23.838510   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:23.848550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:23.848611   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:23.878675   53550 cri.go:89] found id: ""
	I1213 08:58:23.878689   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.878697   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:23.878702   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:23.878770   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:23.904045   53550 cri.go:89] found id: ""
	I1213 08:58:23.904060   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.904067   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:23.904072   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:23.904142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:23.929949   53550 cri.go:89] found id: ""
	I1213 08:58:23.929963   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.929970   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:23.929975   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:23.930035   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:23.955048   53550 cri.go:89] found id: ""
	I1213 08:58:23.955062   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.955069   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:23.955078   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:23.955136   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:23.979633   53550 cri.go:89] found id: ""
	I1213 08:58:23.979647   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.979654   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:23.979659   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:23.979716   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:24.006479   53550 cri.go:89] found id: ""
	I1213 08:58:24.006495   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.006503   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:24.006520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:24.006593   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:24.033349   53550 cri.go:89] found id: ""
	I1213 08:58:24.033369   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.033376   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:24.033385   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:24.033395   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:24.060616   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:24.060635   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:24.119305   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:24.119324   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:24.130335   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:24.130350   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:24.197036   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:24.188772   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.189749   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191420   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191829   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.193342   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:58:24.197046   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:24.197058   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:26.764306   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:26.775859   53550 kubeadm.go:602] duration metric: took 4m4.554296141s to restartPrimaryControlPlane
	W1213 08:58:26.775922   53550 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 08:58:26.776056   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 08:58:27.191363   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:58:27.204546   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
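After 4m4.55s of these probes the restart path gives up ("Unable to restart control-plane node(s), will reset cluster") and minikube falls back to a full re-init: kubeadm reset against the containerd socket, a kubelet liveness check, and a fresh kubeadm.yaml staged into place. The two effective commands, copied from the log with paths unchanged:

    # Wipe the old control-plane state, then stage the new kubeadm config.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml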
	I1213 08:58:27.212501   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:58:27.212553   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:58:27.220364   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:58:27.220373   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 08:58:27.220423   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:58:27.228123   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:58:27.228179   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:58:27.235737   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:58:27.243839   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:58:27.243909   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:58:27.251406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.259128   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:58:27.259197   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.266406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:58:27.274290   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:58:27.274347   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:58:27.281913   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:58:27.321302   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:58:27.321349   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:58:27.394605   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:58:27.394672   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:58:27.394706   53550 kubeadm.go:319] OS: Linux
	I1213 08:58:27.394750   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:58:27.394798   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:58:27.394844   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:58:27.394891   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:58:27.394938   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:58:27.394984   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:58:27.395028   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:58:27.395075   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:58:27.395120   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:58:27.462440   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:58:27.462546   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:58:27.462635   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:58:27.476078   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:58:27.481288   53550 out.go:252]   - Generating certificates and keys ...
	I1213 08:58:27.481378   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:58:27.481454   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:58:27.481542   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:58:27.481611   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:58:27.481690   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:58:27.481750   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:58:27.481822   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:58:27.481892   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:58:27.481974   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:58:27.482055   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:58:27.482101   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:58:27.482165   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:58:27.905850   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:58:28.178703   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:58:28.541521   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:58:28.686915   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:58:29.281245   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:58:29.281953   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:58:29.285342   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:58:29.288544   53550 out.go:252]   - Booting up control plane ...
	I1213 08:58:29.288640   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:58:29.288718   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:58:29.289378   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:58:29.310312   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:58:29.310629   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:58:29.318324   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:58:29.318581   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:58:29.318622   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:58:29.457400   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:58:29.457506   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:02:29.458561   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216357s
	I1213 09:02:29.458592   53550 kubeadm.go:319] 
	I1213 09:02:29.458674   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:02:29.458746   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:02:29.458876   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:02:29.458882   53550 kubeadm.go:319] 
	I1213 09:02:29.458995   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:02:29.459029   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:02:29.459061   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:02:29.459065   53550 kubeadm.go:319] 
	I1213 09:02:29.463013   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:02:29.463412   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:02:29.463534   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:02:29.463755   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:02:29.463760   53550 kubeadm.go:319] 
	I1213 09:02:29.463824   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 09:02:29.463944   53550 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 09:02:29.464028   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:02:29.874512   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:02:29.888184   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:02:29.888240   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:02:29.896053   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:02:29.896063   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 09:02:29.896114   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:02:29.904008   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:02:29.904062   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:02:29.911453   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:02:29.919369   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:02:29.919421   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:02:29.927024   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.934996   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:02:29.935050   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.942367   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:02:29.949946   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:02:29.950000   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:02:29.957647   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:02:29.995750   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:02:29.995800   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:02:30.116553   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:02:30.116615   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:02:30.116649   53550 kubeadm.go:319] OS: Linux
	I1213 09:02:30.116693   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:02:30.116740   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:02:30.116785   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:02:30.116832   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:02:30.116879   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:02:30.116934   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:02:30.116978   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:02:30.117024   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:02:30.117071   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:02:30.188905   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:02:30.189016   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:02:30.189118   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:02:30.196039   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:02:30.201335   53550 out.go:252]   - Generating certificates and keys ...
	I1213 09:02:30.201440   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:02:30.201521   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:02:30.201609   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:02:30.201670   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:02:30.201747   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:02:30.201835   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:02:30.201908   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:02:30.201970   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:02:30.202045   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:02:30.202116   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:02:30.202153   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:02:30.202209   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:02:30.255550   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:02:30.417221   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:02:30.868435   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:02:31.140633   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:02:31.298069   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:02:31.298995   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:02:31.302412   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:02:31.305750   53550 out.go:252]   - Booting up control plane ...
	I1213 09:02:31.305854   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:02:31.305930   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:02:31.305995   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:02:31.327053   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:02:31.327169   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:02:31.334414   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:02:31.334677   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:02:31.334719   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:02:31.474852   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:02:31.474965   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:06:31.473943   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000237859s
	I1213 09:06:31.473980   53550 kubeadm.go:319] 
	I1213 09:06:31.474081   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:06:31.474292   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:06:31.474479   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:06:31.474488   53550 kubeadm.go:319] 
	I1213 09:06:31.474674   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:06:31.474967   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:06:31.475021   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:06:31.475025   53550 kubeadm.go:319] 
	I1213 09:06:31.479982   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:06:31.480734   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:06:31.480923   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:06:31.481347   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:06:31.481355   53550 kubeadm.go:319] 
	I1213 09:06:31.481475   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:06:31.481540   53550 kubeadm.go:403] duration metric: took 12m9.29303151s to StartCluster
	I1213 09:06:31.481569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:06:31.481637   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:06:31.505490   53550 cri.go:89] found id: ""
	I1213 09:06:31.505505   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.505511   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:06:31.505516   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:06:31.505576   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:06:31.533408   53550 cri.go:89] found id: ""
	I1213 09:06:31.533422   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.533429   53550 logs.go:284] No container was found matching "etcd"
	I1213 09:06:31.533433   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:06:31.533495   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:06:31.563195   53550 cri.go:89] found id: ""
	I1213 09:06:31.563218   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.563225   53550 logs.go:284] No container was found matching "coredns"
	I1213 09:06:31.563230   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:06:31.563288   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:06:31.588179   53550 cri.go:89] found id: ""
	I1213 09:06:31.588192   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.588199   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:06:31.588204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:06:31.588262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:06:31.613124   53550 cri.go:89] found id: ""
	I1213 09:06:31.613137   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.613144   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:06:31.613149   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:06:31.613204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:06:31.637268   53550 cri.go:89] found id: ""
	I1213 09:06:31.637282   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.637297   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:06:31.637303   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:06:31.637360   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:06:31.661188   53550 cri.go:89] found id: ""
	I1213 09:06:31.661208   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.661214   53550 logs.go:284] No container was found matching "kindnet"
	I1213 09:06:31.661223   53550 logs.go:123] Gathering logs for container status ...
	I1213 09:06:31.661232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:06:31.690241   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 09:06:31.690257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:06:31.745899   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 09:06:31.745917   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:06:31.756123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:06:31.756137   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:06:31.847485   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:06:31.847496   53550 logs.go:123] Gathering logs for containerd ...
	I1213 09:06:31.847506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 09:06:31.908510   53550 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:06:31.908551   53550 out.go:285] * 
	W1213 09:06:31.908654   53550 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.908704   53550 out.go:285] * 
	W1213 09:06:31.910815   53550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:06:31.916295   53550 out.go:203] 
	W1213 09:06:31.920097   53550 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.920144   53550 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:06:31.920163   53550 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:06:31.923856   53550 out.go:203] 
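	
	The remedy hinted at twice above can be tried directly. A minimal troubleshooting sketch, assembled only from commands this log itself suggests (kubeadm's systemctl/journalctl hints, the kubelet healthz probe on port 10248, and minikube's --extra-config suggestion for issue 4172); the profile name functional-074420 comes from this run, and with the docker driver the node commands run inside the functional-074420 container:
	
		# inspect the kubelet on the node (e.g. docker exec -it functional-074420 bash)
		systemctl status kubelet
		journalctl -xeu kubelet
		curl -sS http://127.0.0.1:10248/healthz   # the probe wait-control-plane polls
	
		# retry the start with the cgroup-driver workaround minikube suggests above
		out/minikube-linux-arm64 start -p functional-074420 --extra-config=kubelet.cgroup-driver=systemd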
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
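	
	The "failed to load cni during init" error earlier in this section is a startup-ordering message rather than a separate failure: /etc/cni/net.d stays empty until a CNI plugin writes its config after the cluster comes up, which never happened here. A quick confirmation sketch, assuming the same path containerd reports:
	
		sudo ls -la /etc/cni/net.d   # expected to be empty, matching 'no network config found' above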
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:33.919670   23090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:33.920064   23090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:33.924454   23090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:33.925149   23090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:33.926276   23090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
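	
	This connection refusal is consistent with the container status gathered earlier: crictl found no kube-apiserver container at all (cri.go logged found id: ""), so nothing is listening on 8441. The same check can be repeated on the node; a sketch reusing the exact command from the log above:
	
		sudo crictl ps -a --quiet --name=kube-apiserver   # empty output here, hence the refused connection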
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:08:33 up 51 min,  0 user,  load average: 0.60, 0.28, 0.35
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:08:30 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:31 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 13 09:08:31 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:31 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:31 functional-074420 kubelet[22911]: E1213 09:08:31.328656   22911 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:31 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:31 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 13 09:08:32 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:32 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:32 functional-074420 kubelet[22939]: E1213 09:08:32.089779   22939 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 13 09:08:32 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:32 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:32 functional-074420 kubelet[22983]: E1213 09:08:32.837588   22983 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:32 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:33 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 09:08:33 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:33 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:33 functional-074420 kubelet[23005]: E1213 09:08:33.573015   23005 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:33 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:33 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
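The kubelet section above is the root cause of every connection-refused error in this report: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps restarting it (counter 479 through 482) and the apiserver on port 8441 never comes up. A minimal sketch for confirming this from outside the test harness, using standard commands against the same profile container:

	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, which this
	# kubelet rejects with the validation error shown above
	docker exec functional-074420 stat -fc %T /sys/fs/cgroup/
	# tail the kubelet restart loop directly from the node's journal
	docker exec functional-074420 journalctl -u kubelet --no-pager -n 20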
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (358.41934ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.99s)
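The status split reported here (apiserver "Stopped" while the host container keeps running) can be cross-checked without minikube by probing the apiserver health endpoint through the forwarded port. A hedged sketch; 32791 is the host port Docker mapped to 8441/tcp according to the docker inspect dump later in this report:

	# connection refused here matches the "Stopped" APIServer status
	curl -k https://127.0.0.1:32791/livez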

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-074420 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-074420 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.419201ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-074420 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-074420 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-074420 describe po hello-node-connect: exit status 1 (56.597906ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-074420 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-074420 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-074420 logs -l app=hello-node-connect: exit status 1 (58.519936ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-074420 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-074420 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-074420 describe svc hello-node-connect: exit status 1 (67.024591ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-074420 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
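Each describe/logs call in this post-mortem fails for the same underlying reason, so before debugging the hello-node objects themselves it is cheaper to settle apiserver reachability in one step. A sketch assuming the same kubeconfig context as the test:

	# returns "ok" when the apiserver is healthy; here it would fail
	# with the same connection-refused error as the commands above
	kubectl --context functional-074420 get --raw /readyz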
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
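Most of the inspect dump above is boilerplate; the one field that matters for these failures is the 8441/tcp port mapping (the apiserver port). It can be pulled directly with a format query in the same style minikube itself uses for 22/tcp in the Last Start log below (a sketch using the stock docker CLI):

	docker inspect functional-074420 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# prints 32791 for this container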
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (296.760374ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-074420 cache reload                                                                                                                               │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ ssh     │ functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │ 13 Dec 25 08:54 UTC │
	│ kubectl │ functional-074420 kubectl -- --context functional-074420 get pods                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ start   │ -p functional-074420 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:54 UTC │                     │
	│ config  │ functional-074420 config unset cpus                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ cp      │ functional-074420 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ config  │ functional-074420 config get cpus                                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ config  │ functional-074420 config set cpus 2                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ config  │ functional-074420 config get cpus                                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ config  │ functional-074420 config unset cpus                                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ config  │ functional-074420 config get cpus                                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ ssh     │ functional-074420 ssh -n functional-074420 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ functional-074420 ssh echo hello                                                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ cp      │ functional-074420 cp functional-074420:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2412260855/001/cp-test.txt │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ functional-074420 ssh cat /etc/hostname                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ functional-074420 ssh -n functional-074420 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ tunnel  │ functional-074420 tunnel --alsologtostderr                                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ tunnel  │ functional-074420 tunnel --alsologtostderr                                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ cp      │ functional-074420 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ tunnel  │ functional-074420 tunnel --alsologtostderr                                                                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ ssh     │ functional-074420 ssh -n functional-074420 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ addons  │ functional-074420 addons list                                                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ addons  │ functional-074420 addons list -o json                                                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:54:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:54:17.881015   53550 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:54:17.881119   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881124   53550 out.go:374] Setting ErrFile to fd 2...
	I1213 08:54:17.881127   53550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:54:17.881367   53550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:54:17.881711   53550 out.go:368] Setting JSON to false
	I1213 08:54:17.882486   53550 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2210,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:54:17.882543   53550 start.go:143] virtualization:  
	I1213 08:54:17.885916   53550 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:54:17.888999   53550 notify.go:221] Checking for updates...
	I1213 08:54:17.889435   53550 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:54:17.892383   53550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:54:17.895200   53550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:54:17.898042   53550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:54:17.900839   53550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:54:17.903626   53550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:54:17.906955   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:17.907037   53550 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:54:17.945038   53550 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:54:17.945157   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.004102   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:17.99317471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.004214   53550 docker.go:319] overlay module found
	I1213 08:54:18.009730   53550 out.go:179] * Using the docker driver based on existing profile
	I1213 08:54:18.012694   53550 start.go:309] selected driver: docker
	I1213 08:54:18.012706   53550 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.012816   53550 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:54:18.012919   53550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:54:18.070601   53550 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 08:54:18.060838365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:54:18.071017   53550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:54:18.071040   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:18.071105   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:18.071147   53550 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:18.074420   53550 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
	I1213 08:54:18.077242   53550 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:54:18.080227   53550 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:54:18.083176   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:18.083216   53550 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:54:18.083225   53550 cache.go:65] Caching tarball of preloaded images
	I1213 08:54:18.083262   53550 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:54:18.083328   53550 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 08:54:18.083337   53550 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 08:54:18.083454   53550 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
	I1213 08:54:18.104039   53550 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 08:54:18.104049   53550 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 08:54:18.104071   53550 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:54:18.104097   53550 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:54:18.104173   53550 start.go:364] duration metric: took 60.013µs to acquireMachinesLock for "functional-074420"
	I1213 08:54:18.104193   53550 start.go:96] Skipping create...Using existing machine configuration
	I1213 08:54:18.104198   53550 fix.go:54] fixHost starting: 
	I1213 08:54:18.104469   53550 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
	I1213 08:54:18.121469   53550 fix.go:112] recreateIfNeeded on functional-074420: state=Running err=<nil>
	W1213 08:54:18.121489   53550 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 08:54:18.124664   53550 out.go:252] * Updating the running docker "functional-074420" container ...
	I1213 08:54:18.124700   53550 machine.go:94] provisionDockerMachine start ...
	I1213 08:54:18.124779   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.142221   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.142535   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.142542   53550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:54:18.290889   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.290902   53550 ubuntu.go:182] provisioning hostname "functional-074420"
	I1213 08:54:18.290965   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.308398   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.308699   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.308706   53550 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
	I1213 08:54:18.463898   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
	
	I1213 08:54:18.463977   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.481808   53550 main.go:143] libmachine: Using SSH client type: native
	I1213 08:54:18.482113   53550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1213 08:54:18.482128   53550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:54:18.639897   53550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:54:18.639913   53550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 08:54:18.639945   53550 ubuntu.go:190] setting up certificates
	I1213 08:54:18.639960   53550 provision.go:84] configureAuth start
	I1213 08:54:18.640021   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:18.657069   53550 provision.go:143] copyHostCerts
	I1213 08:54:18.657137   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 08:54:18.657145   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 08:54:18.657224   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 08:54:18.657317   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 08:54:18.657321   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 08:54:18.657345   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 08:54:18.657393   53550 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 08:54:18.657396   53550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 08:54:18.657421   53550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 08:54:18.657462   53550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
	I1213 08:54:18.978851   53550 provision.go:177] copyRemoteCerts
	I1213 08:54:18.978913   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:54:18.978954   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:18.996497   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.099309   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:54:19.116489   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 08:54:19.134491   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:54:19.152584   53550 provision.go:87] duration metric: took 512.603195ms to configureAuth
	I1213 08:54:19.152601   53550 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:54:19.152798   53550 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 08:54:19.152804   53550 machine.go:97] duration metric: took 1.028099835s to provisionDockerMachine
	I1213 08:54:19.152810   53550 start.go:293] postStartSetup for "functional-074420" (driver="docker")
	I1213 08:54:19.152820   53550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:54:19.152868   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:54:19.152914   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.170238   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.275637   53550 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:54:19.280193   53550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:54:19.280211   53550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:54:19.280223   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 08:54:19.280276   53550 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 08:54:19.280348   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 08:54:19.280419   53550 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
	I1213 08:54:19.280458   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
	I1213 08:54:19.288420   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:19.306689   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
	I1213 08:54:19.324595   53550 start.go:296] duration metric: took 171.770829ms for postStartSetup
	I1213 08:54:19.324673   53550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:54:19.324742   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.347206   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.449063   53550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:54:19.453865   53550 fix.go:56] duration metric: took 1.349660427s for fixHost
	I1213 08:54:19.453881   53550 start.go:83] releasing machines lock for "functional-074420", held for 1.349700469s
	I1213 08:54:19.453945   53550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
	I1213 08:54:19.471349   53550 ssh_runner.go:195] Run: cat /version.json
	I1213 08:54:19.471396   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.471420   53550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:54:19.471481   53550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
	I1213 08:54:19.492979   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.505163   53550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
	I1213 08:54:19.686546   53550 ssh_runner.go:195] Run: systemctl --version
	I1213 08:54:19.692986   53550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:54:19.697303   53550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:54:19.697365   53550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:54:19.705133   53550 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 08:54:19.705146   53550 start.go:496] detecting cgroup driver to use...
	I1213 08:54:19.705176   53550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 08:54:19.705226   53550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 08:54:19.720729   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 08:54:19.733460   53550 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:54:19.733514   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:54:19.748695   53550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:54:19.761831   53550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:54:19.870034   53550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:54:19.996014   53550 docker.go:234] disabling docker service ...
	I1213 08:54:19.996078   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:54:20.014799   53550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:54:20.030104   53550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:54:20.162441   53550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:54:20.283014   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:54:20.297184   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:54:20.311847   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 08:54:20.321141   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 08:54:20.330609   53550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 08:54:20.330677   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 08:54:20.339444   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.348072   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 08:54:20.356752   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 08:54:20.365663   53550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:54:20.373861   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 08:54:20.383214   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 08:54:20.392296   53550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 08:54:20.401182   53550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:54:20.408521   53550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:54:20.415857   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:20.524736   53550 ssh_runner.go:195] Run: sudo systemctl restart containerd
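The run of sed commands above edits /etc/containerd/config.toml in place before the daemon restart; the key change is forcing SystemdCgroup = false to match the detected cgroupfs driver. A minimal Go sketch of that one substitution (an illustrative port of the sed line, not minikube's containerd.go):

```go
// Illustrative port of the sed line above: force SystemdCgroup = false in
// containerd's config while preserving each line's indentation.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// (?m) makes ^ and $ match per line, like sed's line-oriented addressing.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart containerd` follow, as in the log.
}
```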
	I1213 08:54:20.667475   53550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 08:54:20.667553   53550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 08:54:20.671249   53550 start.go:564] Will wait 60s for crictl version
	I1213 08:54:20.671308   53550 ssh_runner.go:195] Run: which crictl
	I1213 08:54:20.674869   53550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:54:20.699246   53550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
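The "Will wait 60s for socket path" step is a simple existence poll: stat the containerd socket until it appears or the budget runs out. A minimal sketch, assuming a 500ms poll interval (the interval is not shown in the log):

```go
// Poll until the containerd socket exists or the 60s budget is spent.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl can be queried now
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```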
	I1213 08:54:20.699301   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.723418   53550 ssh_runner.go:195] Run: containerd --version
	I1213 08:54:20.748134   53550 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 08:54:20.751095   53550 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:54:20.766935   53550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:54:20.773949   53550 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 08:54:20.776880   53550 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:54:20.777036   53550 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 08:54:20.777116   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.804622   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.804634   53550 containerd.go:534] Images already preloaded, skipping extraction
	I1213 08:54:20.804691   53550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:54:20.834431   53550 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 08:54:20.834444   53550 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:54:20.834451   53550 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1213 08:54:20.834559   53550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:54:20.834624   53550 ssh_runner.go:195] Run: sudo crictl info
	I1213 08:54:20.867174   53550 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 08:54:20.867192   53550 cni.go:84] Creating CNI manager for ""
	I1213 08:54:20.867200   53550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:54:20.867220   53550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:54:20.867242   53550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:54:20.867356   53550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-074420"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:54:20.867422   53550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 08:54:20.875127   53550 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:54:20.875185   53550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:54:20.882880   53550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 08:54:20.898646   53550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 08:54:20.911841   53550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
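The "scp memory --> <path> (N bytes)" lines stream generated config bytes straight from memory to a file on the node over the SSH connection opened earlier, with no temporary file on the host. A hedged sketch of that pattern using golang.org/x/crypto/ssh (host, port, user, and key path echo the log; the tee-based helper is illustrative, not minikube's ssh_runner):

```go
// Stream an in-memory payload to a root-owned file on the node via SSH.
package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-074420/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	payload := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...\n")
	sess.Stdin = bytes.NewReader(payload)
	// tee copies stdin to the destination; sudo handles root-owned directories.
	if err := sess.Run("sudo tee /var/tmp/minikube/kubeadm.yaml.new >/dev/null"); err != nil {
		log.Fatal(err)
	}
}
```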
	I1213 08:54:20.925067   53550 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:54:20.928972   53550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:54:21.047902   53550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:54:21.521591   53550 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
	I1213 08:54:21.521603   53550 certs.go:195] generating shared ca certs ...
	I1213 08:54:21.521617   53550 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:54:21.521756   53550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 08:54:21.521796   53550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 08:54:21.521802   53550 certs.go:257] generating profile certs ...
	I1213 08:54:21.521883   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
	I1213 08:54:21.521933   53550 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
	I1213 08:54:21.521973   53550 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
	I1213 08:54:21.522082   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 08:54:21.522113   53550 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 08:54:21.522120   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:54:21.522146   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:54:21.522168   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:54:21.522190   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 08:54:21.522232   53550 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 08:54:21.522796   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:54:21.547463   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:54:21.565502   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:54:21.583029   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 08:54:21.600675   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 08:54:21.617821   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:54:21.634794   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:54:21.652088   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:54:21.669338   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:54:21.685563   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 08:54:21.702834   53550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 08:54:21.719220   53550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:54:21.731588   53550 ssh_runner.go:195] Run: openssl version
	I1213 08:54:21.737357   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.744365   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:54:21.751316   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754910   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.754961   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:54:21.795815   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:54:21.802933   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.809987   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 08:54:21.817141   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820600   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.820668   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 08:54:21.861349   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 08:54:21.868464   53550 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.875279   53550 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 08:54:21.882257   53550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.885950   53550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.886012   53550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 08:54:21.927672   53550 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
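Each certificate install above ends with an `openssl x509 -hash` call plus a `test -L` on a hash-named link (b5213941.0, 51391683.0, 3ec20f2e.0). That is OpenSSL's CA lookup convention: trusted certs are resolved via symlinks named <subject-hash>.0 under /etc/ssl/certs. A sketch of that openssl/ln pair (requires root to write /etc/ssl/certs):

```go
// Compute the OpenSSL subject hash and create the <hash>.0 lookup symlink.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```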
	I1213 08:54:21.934830   53550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:54:21.938562   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 08:54:21.979443   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 08:54:22.023588   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 08:54:22.065341   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 08:54:22.106598   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 08:54:22.147410   53550 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
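The `openssl x509 -checkend 86400` calls above verify that each cluster certificate is still valid 24 hours from now, so a restart never reuses a cert about to expire. The same check in pure Go with crypto/x509 (path taken from the log; illustrative only):

```go
// Equivalent of `openssl x509 -checkend 86400`: fail if the cert is not
// valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it should be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid past the 24h window")
}
```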
	I1213 08:54:22.188516   53550 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:54:22.188592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 08:54:22.188655   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.213570   53550 cri.go:89] found id: ""
	I1213 08:54:22.213647   53550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:54:22.221547   53550 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 08:54:22.221555   53550 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 08:54:22.221616   53550 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 08:54:22.229060   53550 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.229555   53550 kubeconfig.go:125] found "functional-074420" server: "https://192.168.49.2:8441"
	I1213 08:54:22.232016   53550 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 08:54:22.239904   53550 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 08:39:47.751417218 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 08:54:20.919594824 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
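Drift detection above is just `diff -u` plus exit-code inspection: diff exits 0 when the files match, 1 when they differ (triggering the reconfigure), and 2 or higher on error. A minimal sketch of that check:

```go
// diff -u exits 0 (identical), 1 (differs -> reconfigure), or 2+ (error).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // stdout is captured even when diff exits non-zero
	if err == nil {
		fmt.Println("kubeadm config unchanged; no reconfigure needed")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("detected kubeadm config drift:\n%s", out)
		return
	}
	log.Fatal(err) // diff itself failed
}
```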
	I1213 08:54:22.239924   53550 kubeadm.go:1161] stopping kube-system containers ...
	I1213 08:54:22.239936   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 08:54:22.239998   53550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:54:22.266484   53550 cri.go:89] found id: ""
	I1213 08:54:22.266565   53550 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 08:54:22.285823   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:54:22.293457   53550 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 08:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 08:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 13 08:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 08:43 /etc/kubernetes/scheduler.conf
	
	I1213 08:54:22.293536   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:54:22.301460   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:54:22.308894   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.308947   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:54:22.316083   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.323905   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.323959   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:54:22.331273   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:54:22.338736   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:54:22.338789   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:54:22.346320   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:54:22.354109   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:22.400461   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.430760   53550 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.030276983s)
	I1213 08:54:24.430822   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.648055   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 08:54:24.718708   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
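Because existing configuration files were found, minikube reruns individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A sketch of that sequence (the log additionally prefixes PATH with the versioned binaries directory, omitted here for brevity):

```go
// Rerun the individual kubeadm init phases against the regenerated config.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{}, p...), "--config", cfg)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", p, err)
		}
	}
}
```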
	I1213 08:54:24.760609   53550 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:54:24.760672   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.261709   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:25.761435   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:26.261759   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:26.761732   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:27.260880   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:27.760874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:28.261721   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:28.761493   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:29.260874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:29.761189   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:30.260883   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:30.761448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:31.260872   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:31.761568   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:32.260967   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:32.760840   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:33.261542   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:33.761383   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:34.261771   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:34.761647   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:35.260857   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:35.760860   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:36.261572   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:36.761127   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:37.260746   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:37.760824   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:38.261446   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:38.760828   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:39.261574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:39.760780   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:40.261697   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:40.760839   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:41.261384   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:41.761710   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:42.261116   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:42.761031   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:43.260886   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:43.760843   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:44.261147   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:44.761415   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:45.260979   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:45.761106   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:46.261523   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:46.760830   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:47.261517   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:47.760776   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:48.261547   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:48.761373   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:49.260826   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:49.761574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:50.261136   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:50.761757   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:51.261197   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:51.761646   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:52.261300   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:52.761696   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:53.260864   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:53.761574   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:54.260768   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:54.761242   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:55.261350   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:55.761559   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:56.261198   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:56.761477   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:57.261567   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:57.760861   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:58.261803   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:58.760843   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:59.260868   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:54:59.761676   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:00.261517   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:00.761052   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:01.260802   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:01.760882   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:02.260924   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:02.760742   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:03.261542   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:03.761112   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:04.260813   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:04.761741   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:05.261225   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:05.760863   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:06.261426   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:06.761616   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:07.260888   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:07.760944   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:08.260874   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:08.761422   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:09.261767   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:09.761735   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:10.261376   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:10.760871   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:11.261761   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:11.761199   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:12.260928   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:12.761700   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:13.261570   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:13.761185   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:14.261662   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:14.760883   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:15.260866   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:15.761804   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:16.261789   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:16.761363   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:17.260776   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:17.761086   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:18.261288   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:18.760851   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:19.261191   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:19.761422   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:20.261577   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:20.761202   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:21.260768   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:21.761795   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:22.260945   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:22.761690   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:23.260860   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:23.761430   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:24.261657   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
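The long run of pgrep lines above is a poll for the kube-apiserver process, retried roughly every 500ms; after about a minute without a match, minikube falls back to the diagnostics gathering below. A sketch of that wait loop (interval and budget inferred from the timestamps, not taken from source):

```go
// Poll pgrep until the apiserver process appears or the budget is spent.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second) // assumed budget
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
	os.Exit(1)
}
```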
	I1213 08:55:24.761756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:24.761828   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:24.786217   53550 cri.go:89] found id: ""
	I1213 08:55:24.786236   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.786243   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:24.786249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:24.786328   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:24.809104   53550 cri.go:89] found id: ""
	I1213 08:55:24.809118   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.809125   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:24.809130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:24.809187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:24.832861   53550 cri.go:89] found id: ""
	I1213 08:55:24.832880   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.832887   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:24.832892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:24.832949   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:24.856552   53550 cri.go:89] found id: ""
	I1213 08:55:24.856566   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.856573   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:24.856578   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:24.856634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:24.879617   53550 cri.go:89] found id: ""
	I1213 08:55:24.879631   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.879638   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:24.879643   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:24.879700   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:24.905506   53550 cri.go:89] found id: ""
	I1213 08:55:24.905520   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.905526   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:24.905532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:24.905588   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:24.930567   53550 cri.go:89] found id: ""
	I1213 08:55:24.930581   53550 logs.go:282] 0 containers: []
	W1213 08:55:24.930587   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:24.930595   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:24.930605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:24.961663   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:24.961679   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:25.017689   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:25.017709   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:25.035228   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:25.035257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:25.112728   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:25.103316   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.103954   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.105905   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.106753   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:25.108540   10768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:25.112738   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:25.112750   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
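When no kube-system containers are found, the fallback diagnostics above tail the journals of the relevant units (`journalctl -u <unit> -n 400`) alongside dmesg and `crictl ps`. A sketch of the journal-gathering part (flags copied from the log; the helper itself is illustrative):

```go
// Tail the journals of the units minikube inspects when the cluster is down.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func tailUnit(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		out, err := tailUnit(unit, 400)
		if err != nil {
			log.Printf("gathering %s logs failed: %v", unit, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", unit, out)
	}
}
```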
	I1213 08:55:27.676671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:27.686646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:27.686705   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:27.710449   53550 cri.go:89] found id: ""
	I1213 08:55:27.710462   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.710469   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:27.710474   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:27.710531   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:27.734910   53550 cri.go:89] found id: ""
	I1213 08:55:27.734923   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.734943   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:27.734949   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:27.735007   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:27.762767   53550 cri.go:89] found id: ""
	I1213 08:55:27.762787   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.762794   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:27.762799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:27.762853   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:27.789263   53550 cri.go:89] found id: ""
	I1213 08:55:27.789282   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.789288   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:27.789293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:27.789352   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:27.817361   53550 cri.go:89] found id: ""
	I1213 08:55:27.817374   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.817381   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:27.817386   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:27.817444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:27.841034   53550 cri.go:89] found id: ""
	I1213 08:55:27.841047   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.841054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:27.841059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:27.841114   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:27.865949   53550 cri.go:89] found id: ""
	I1213 08:55:27.865963   53550 logs.go:282] 0 containers: []
	W1213 08:55:27.865970   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:27.865978   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:27.865988   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:27.921352   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:27.921372   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:27.934950   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:27.934966   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:28.012009   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:27.997838   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:27.998688   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.000757   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.005522   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:28.006421   10855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:28.012023   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:28.012036   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:28.081214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:28.081231   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:30.614736   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:30.624755   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:30.624816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:30.650174   53550 cri.go:89] found id: ""
	I1213 08:55:30.650188   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.650195   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:30.650200   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:30.650257   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:30.675572   53550 cri.go:89] found id: ""
	I1213 08:55:30.675585   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.675592   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:30.675597   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:30.675661   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:30.700274   53550 cri.go:89] found id: ""
	I1213 08:55:30.700288   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.700295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:30.700301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:30.700357   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:30.724242   53550 cri.go:89] found id: ""
	I1213 08:55:30.724255   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.724262   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:30.724267   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:30.724322   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:30.749004   53550 cri.go:89] found id: ""
	I1213 08:55:30.749018   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.749025   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:30.749029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:30.749091   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:30.772837   53550 cri.go:89] found id: ""
	I1213 08:55:30.772850   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.772857   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:30.772862   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:30.772917   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:30.796329   53550 cri.go:89] found id: ""
	I1213 08:55:30.796343   53550 logs.go:282] 0 containers: []
	W1213 08:55:30.796350   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:30.796358   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:30.796369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:30.806800   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:30.806816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:30.869919   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:30.861579   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.861934   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.863644   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.864162   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.865728   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:30.861579   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.861934   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.863644   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.864162   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:30.865728   10959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:30.869929   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:30.869939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:30.936472   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:30.936496   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:30.965152   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:30.965167   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
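	For reference, one iteration of the diagnostic loop above can be reproduced by hand on the node. This is only a convenience sketch: every command below is copied from the Run: lines in this log, and minikube simply re-runs them roughly every three seconds until an apiserver process appears:
	
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    sudo crictl ps -a --quiet --name=kube-apiserver    # repeated for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig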
	[The diagnostic cycle above repeats with identical results at 08:55:33, 08:55:36, 08:55:39, 08:55:42, 08:55:45, 08:55:48, and 08:55:51: no control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) are found, and "kubectl describe nodes" exits with status 1 because the connection to the server localhost:8441 is refused. Only the kubectl client PIDs (10959 through 11705) and the order of the log-gathering steps vary between iterations. The final captured iteration, at 08:55:54, follows in full.]
	I1213 08:55:54.077204   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:54.087800   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:54.087874   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:54.113040   53550 cri.go:89] found id: ""
	I1213 08:55:54.113055   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.113062   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:54.113067   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:54.113124   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:54.138822   53550 cri.go:89] found id: ""
	I1213 08:55:54.138835   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.138842   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:54.138847   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:54.138906   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:54.163439   53550 cri.go:89] found id: ""
	I1213 08:55:54.163452   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.163459   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:54.163465   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:54.163557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:54.188125   53550 cri.go:89] found id: ""
	I1213 08:55:54.188138   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.188145   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:54.188152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:54.188208   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:54.212893   53550 cri.go:89] found id: ""
	I1213 08:55:54.212907   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.212914   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:54.212920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:54.212981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:54.237373   53550 cri.go:89] found id: ""
	I1213 08:55:54.237386   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.237393   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:54.237399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:54.237459   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:54.265504   53550 cri.go:89] found id: ""
	I1213 08:55:54.265518   53550 logs.go:282] 0 containers: []
	W1213 08:55:54.265525   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:54.265532   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:54.265542   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:54.333125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:54.333143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:54.347402   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:54.347418   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:54.412166   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:54.403644   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.404415   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406100   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.406397   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:54.408104   11797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:54.412175   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:54.412187   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:55:54.480709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:54.480730   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
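By this point the shape of the failure is already fixed: minikube polls for a live apiserver process ('pgrep -xnf kube-apiserver.*minikube.*'), queries containerd through crictl for each control-plane component, gets an empty id list every time, and falls through to gathering diagnostics (kubelet, dmesg, describe nodes, containerd, container status) before the next retry. A minimal sketch of that per-component check, assembled only from the commands visible above and meant to run inside the node (for example via 'minikube -p <profile> ssh', where <profile> is a placeholder, not a name taken from this log):

    # Check every control-plane component the log polls for, in one pass.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # Same query as the log: list container ids in any state for this name.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

All seven names coming back empty on every iteration, as they do here, means containerd has no record of any control-plane container at all, which suggests the control plane never got as far as creating containers rather than crash-looping afterwards.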
	I1213 08:55:57.010334   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:57.021059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:57.021120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:57.047281   53550 cri.go:89] found id: ""
	I1213 08:55:57.047294   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.047301   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:57.047306   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:57.047377   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:55:57.071416   53550 cri.go:89] found id: ""
	I1213 08:55:57.071429   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.071436   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:55:57.071441   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:55:57.071498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:55:57.101079   53550 cri.go:89] found id: ""
	I1213 08:55:57.101092   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.101104   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:55:57.101110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:55:57.101166   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:55:57.125577   53550 cri.go:89] found id: ""
	I1213 08:55:57.125591   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.125598   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:55:57.125603   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:55:57.125664   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:55:57.150869   53550 cri.go:89] found id: ""
	I1213 08:55:57.150883   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.150890   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:55:57.150895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:55:57.150952   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:55:57.175181   53550 cri.go:89] found id: ""
	I1213 08:55:57.175196   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.175203   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:55:57.175209   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:55:57.175265   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:55:57.201951   53550 cri.go:89] found id: ""
	I1213 08:55:57.201964   53550 logs.go:282] 0 containers: []
	W1213 08:55:57.201981   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:55:57.201989   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:55:57.202000   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:55:57.230175   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:55:57.230191   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:55:57.289371   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:55:57.289389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:55:57.301801   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:55:57.301816   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:55:57.376259   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:55:57.367385   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.368081   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.369821   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.370486   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:55:57.372258   11910 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:55:57.376279   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:55:57.376290   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
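The timestamps show the retry cadence: a fresh poll roughly every three seconds (08:55:54, 08:55:57, 08:56:00, and so on), and each failed 'describe nodes' spawns a new kubectl process, which is why the logged pid keeps climbing (11797, 11910, 12003, ...). The repeated 'dial tcp [::1]:8441: connect: connection refused' says nothing is listening on the apiserver port inside the node. Two quick manual probes along the same lines, offered as a sketch only, since they assume 'ss' and 'curl' exist in the node image and '<profile>' is again a placeholder:

    # Is anything listening on the custom apiserver port?
    minikube -p <profile> ssh "sudo ss -ltn | grep 8441 || echo 'nothing listening on 8441'"
    # The same endpoint the kubectl errors above keep failing against.
    minikube -p <profile> ssh "curl -sk https://localhost:8441/healthz || true"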
	I1213 08:55:59.938203   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:59.948941   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:55:59.949015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:55:59.975053   53550 cri.go:89] found id: ""
	I1213 08:55:59.975067   53550 logs.go:282] 0 containers: []
	W1213 08:55:59.975074   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:55:59.975079   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:55:59.975140   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:00.036168   53550 cri.go:89] found id: ""
	I1213 08:56:00.036184   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.036198   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:00.036204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:00.036272   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:00.212433   53550 cri.go:89] found id: ""
	I1213 08:56:00.212448   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.212457   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:00.212463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:00.212534   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:00.329892   53550 cri.go:89] found id: ""
	I1213 08:56:00.329922   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.329931   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:00.329937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:00.330147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:00.418357   53550 cri.go:89] found id: ""
	I1213 08:56:00.418382   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.418390   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:00.418395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:00.418485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:00.472022   53550 cri.go:89] found id: ""
	I1213 08:56:00.472038   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.472057   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:00.472063   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:00.472147   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:00.501778   53550 cri.go:89] found id: ""
	I1213 08:56:00.501793   53550 logs.go:282] 0 containers: []
	W1213 08:56:00.501800   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:00.501809   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:00.501821   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:00.514889   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:00.514908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:00.586263   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:00.576477   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.577506   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.579365   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.580284   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.582096   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:00.576477   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.577506   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.579365   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.580284   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:00.582096   12003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:00.586275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:00.586286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:00.651709   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:00.651729   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:00.679944   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:00.679961   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:03.240030   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:03.250487   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:03.250564   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:03.276985   53550 cri.go:89] found id: ""
	I1213 08:56:03.276999   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.277006   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:03.277011   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:03.277079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:03.305874   53550 cri.go:89] found id: ""
	I1213 08:56:03.305887   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.305894   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:03.305900   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:03.305961   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:03.332792   53550 cri.go:89] found id: ""
	I1213 08:56:03.332805   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.332812   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:03.332817   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:03.332875   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:03.359327   53550 cri.go:89] found id: ""
	I1213 08:56:03.359340   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.359347   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:03.359352   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:03.359414   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:03.383789   53550 cri.go:89] found id: ""
	I1213 08:56:03.383802   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.383818   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:03.383823   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:03.383881   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:03.409294   53550 cri.go:89] found id: ""
	I1213 08:56:03.409308   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.409315   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:03.409320   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:03.409380   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:03.433579   53550 cri.go:89] found id: ""
	I1213 08:56:03.433593   53550 logs.go:282] 0 containers: []
	W1213 08:56:03.433600   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:03.433608   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:03.433620   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:03.444272   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:03.444288   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:03.513583   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:03.505953   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.506372   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507548   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507861   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.509300   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:03.505953   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.506372   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507548   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.507861   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:03.509300   12108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:03.513594   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:03.513605   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:03.576629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:03.576649   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:03.608162   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:03.608178   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.165156   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:06.175029   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:06.175086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:06.199548   53550 cri.go:89] found id: ""
	I1213 08:56:06.199561   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.199567   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:06.199573   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:06.199630   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:06.223345   53550 cri.go:89] found id: ""
	I1213 08:56:06.223358   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.223365   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:06.223370   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:06.223427   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:06.253772   53550 cri.go:89] found id: ""
	I1213 08:56:06.253785   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.253792   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:06.253797   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:06.253862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:06.285197   53550 cri.go:89] found id: ""
	I1213 08:56:06.285209   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.285216   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:06.285221   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:06.285287   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:06.311117   53550 cri.go:89] found id: ""
	I1213 08:56:06.311130   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.311137   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:06.311142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:06.311199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:06.347101   53550 cri.go:89] found id: ""
	I1213 08:56:06.347115   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.347121   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:06.347134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:06.347212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:06.373093   53550 cri.go:89] found id: ""
	I1213 08:56:06.373106   53550 logs.go:282] 0 containers: []
	W1213 08:56:06.373113   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:06.373121   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:06.373131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:06.432261   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:06.432286   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:06.443840   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:06.443858   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:06.510711   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:06.501971   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.502684   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.504393   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.505195   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.506872   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:06.501971   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.502684   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.504393   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.505195   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:06.506872   12216 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:06.510722   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:06.510745   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:06.572342   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:06.572360   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
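The 'container status' command above hides a small shell idiom worth unpacking: the backtick substitution 'which crictl || echo crictl' expands to the full path of crictl when it is on PATH and to the bare word crictl otherwise (so a failure still names the missing tool), and the trailing '|| sudo docker ps -a' falls back to the Docker CLI if crictl is absent or errors out. The same construct in modern '$(...)' form, behaviour unchanged:

    # Prefer crictl (by full path when found); fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a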
	I1213 08:56:09.099708   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:09.109781   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:09.109837   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:09.134708   53550 cri.go:89] found id: ""
	I1213 08:56:09.134722   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.134729   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:09.134734   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:09.134793   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:09.159277   53550 cri.go:89] found id: ""
	I1213 08:56:09.159291   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.159297   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:09.159302   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:09.159367   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:09.185743   53550 cri.go:89] found id: ""
	I1213 08:56:09.185756   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.185763   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:09.185768   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:09.185827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:09.209881   53550 cri.go:89] found id: ""
	I1213 08:56:09.209894   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.209901   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:09.209907   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:09.209963   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:09.233078   53550 cri.go:89] found id: ""
	I1213 08:56:09.233091   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.233099   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:09.233104   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:09.233165   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:09.261187   53550 cri.go:89] found id: ""
	I1213 08:56:09.261200   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.261208   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:09.261216   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:09.261274   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:09.303988   53550 cri.go:89] found id: ""
	I1213 08:56:09.304001   53550 logs.go:282] 0 containers: []
	W1213 08:56:09.304008   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:09.304016   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:09.304035   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:09.366963   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:09.366982   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:09.377754   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:09.377770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:09.445863   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:09.437325   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.437871   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.439698   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.440090   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.442054   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:09.437325   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.437871   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.439698   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.440090   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:09.442054   12320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:09.445873   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:09.445884   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:09.507900   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:09.507918   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.036492   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:12.046919   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:12.046978   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:12.071196   53550 cri.go:89] found id: ""
	I1213 08:56:12.071211   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.071218   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:12.071223   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:12.071285   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:12.097508   53550 cri.go:89] found id: ""
	I1213 08:56:12.097522   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.097529   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:12.097534   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:12.097591   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:12.122628   53550 cri.go:89] found id: ""
	I1213 08:56:12.122641   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.122649   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:12.122654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:12.122714   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:12.147292   53550 cri.go:89] found id: ""
	I1213 08:56:12.147306   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.147313   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:12.147318   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:12.147385   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:12.171601   53550 cri.go:89] found id: ""
	I1213 08:56:12.171615   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.171622   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:12.171629   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:12.171685   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:12.195241   53550 cri.go:89] found id: ""
	I1213 08:56:12.195255   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.195272   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:12.195277   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:12.195332   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:12.220835   53550 cri.go:89] found id: ""
	I1213 08:56:12.220849   53550 logs.go:282] 0 containers: []
	W1213 08:56:12.220866   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:12.220874   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:12.220883   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:12.283214   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:12.283232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:12.322176   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:12.322192   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:12.382990   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:12.383007   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:12.393976   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:12.393993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:12.454561   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:12.446899   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.447411   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.448884   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.449202   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.450694   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:12.446899   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.447411   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.448884   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.449202   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:12.450694   12439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
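Note that the five log sources are not gathered in a fixed order across retries: the 08:55:54 pass goes kubelet, dmesg, describe nodes, containerd, container status, while the 08:56:12 pass just above goes containerd, container status, kubelet, dmesg, describe nodes. That shuffling is consistent with the collector iterating over a Go map of sources, whose iteration order is randomized by the runtime, though that is an inference from this output rather than something the log states. A hypothetical one-liner to recover the order from a saved copy of this output (the file name is a placeholder):

    # Print the gather order as it appears, collapsing consecutive repeats.
    grep -o 'Gathering logs for [a-z ]*' minikube-start.log | uniq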
	I1213 08:56:14.956323   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:14.966379   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:14.966439   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:14.992786   53550 cri.go:89] found id: ""
	I1213 08:56:14.992801   53550 logs.go:282] 0 containers: []
	W1213 08:56:14.992807   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:14.992813   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:14.992876   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:15.028638   53550 cri.go:89] found id: ""
	I1213 08:56:15.028653   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.028660   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:15.028666   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:15.028735   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:15.059274   53550 cri.go:89] found id: ""
	I1213 08:56:15.059288   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.059295   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:15.059301   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:15.059408   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:15.089311   53550 cri.go:89] found id: ""
	I1213 08:56:15.089324   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.089331   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:15.089336   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:15.089401   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:15.118691   53550 cri.go:89] found id: ""
	I1213 08:56:15.118705   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.118712   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:15.118717   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:15.118773   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:15.144494   53550 cri.go:89] found id: ""
	I1213 08:56:15.144507   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.144514   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:15.144519   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:15.144577   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:15.173885   53550 cri.go:89] found id: ""
	I1213 08:56:15.173899   53550 logs.go:282] 0 containers: []
	W1213 08:56:15.173905   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:15.173914   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:15.173925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:15.236112   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:15.228066   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.228792   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230471   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230772   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.232236   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:15.228066   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.228792   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230471   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.230772   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:15.232236   12518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:15.236121   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:15.236134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:15.298113   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:15.298131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:15.342964   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:15.342980   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:15.400545   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:15.400563   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
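A reading note on the error format: when 'describe nodes' fails, logs.go:130 first prints the command error, which already embeds the captured stderr, and then prints that same stderr again between the '** stderr **' and '** /stderr **' markers, so every connection-refused burst in this report appears twice per retry. A hypothetical sed filter to read one copy only (file name again a placeholder):

    # Drop the delimited echo and keep the inline copy of each stderr block.
    sed '/\*\* stderr \*\*/,/\*\* \/stderr \*\*/d' minikube-start.log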
	I1213 08:56:17.911444   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:17.921343   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:17.921402   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:17.947826   53550 cri.go:89] found id: ""
	I1213 08:56:17.947840   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.947847   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:17.947852   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:17.947908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:17.971346   53550 cri.go:89] found id: ""
	I1213 08:56:17.971376   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.971383   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:17.971387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:17.971449   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:17.999271   53550 cri.go:89] found id: ""
	I1213 08:56:17.999285   53550 logs.go:282] 0 containers: []
	W1213 08:56:17.999292   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:17.999298   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:17.999371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:18.031971   53550 cri.go:89] found id: ""
	I1213 08:56:18.031984   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.031991   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:18.031996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:18.032058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:18.057098   53550 cri.go:89] found id: ""
	I1213 08:56:18.057112   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.057119   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:18.057127   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:18.057187   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:18.081981   53550 cri.go:89] found id: ""
	I1213 08:56:18.082007   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.082014   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:18.082021   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:18.082092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:18.108138   53550 cri.go:89] found id: ""
	I1213 08:56:18.108152   53550 logs.go:282] 0 containers: []
	W1213 08:56:18.108159   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:18.108166   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:18.108179   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:18.118705   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:18.118723   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:18.182232   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:18.173836   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.174533   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176073   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176393   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.177924   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:18.173836   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.174533   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176073   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.176393   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:18.177924   12630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:18.182242   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:18.182253   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:18.243585   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:18.243606   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:18.292655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:18.292671   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
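
The cycle above repeats roughly every three seconds: the test binary probes for a kube-apiserver process with pgrep, then asks crictl for each expected control-plane container by name, and gathers diagnostics (kubelet, dmesg, describe nodes, containerd, container status) when none are found. A minimal Go sketch of that polling pattern follows; it is an illustration built from the command strings in the log above, not minikube's actual source, and a real implementation would bound the retries with a deadline.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerIDs mirrors the logged command
    // `sudo crictl ps -a --quiet --name=<name>` and returns any container IDs.
    func containerIDs(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil || strings.TrimSpace(string(out)) == "" {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet"}
    	for {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		for _, c := range components {
    			if len(containerIDs(c)) == 0 {
    				fmt.Printf("no container found matching %q\n", c)
    			}
    		}
    		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
    	}
    }
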
	I1213 08:56:20.860353   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:20.870680   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:20.870753   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:20.895485   53550 cri.go:89] found id: ""
	I1213 08:56:20.895499   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.895506   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:20.895532   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:20.895592   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:20.921461   53550 cri.go:89] found id: ""
	I1213 08:56:20.921475   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.921482   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:20.921486   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:20.921545   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:20.946484   53550 cri.go:89] found id: ""
	I1213 08:56:20.946498   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.946507   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:20.946512   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:20.946570   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:20.971723   53550 cri.go:89] found id: ""
	I1213 08:56:20.971737   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.971744   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:20.971749   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:20.971806   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:20.996903   53550 cri.go:89] found id: ""
	I1213 08:56:20.996917   53550 logs.go:282] 0 containers: []
	W1213 08:56:20.996924   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:20.996929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:20.996987   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:21.025270   53550 cri.go:89] found id: ""
	I1213 08:56:21.025283   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.025290   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:21.025295   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:21.025354   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:21.050984   53550 cri.go:89] found id: ""
	I1213 08:56:21.050998   53550 logs.go:282] 0 containers: []
	W1213 08:56:21.051005   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:21.051013   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:21.051024   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:21.061853   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:21.061867   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:21.130720   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:21.122077   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.122912   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124411   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124796   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.126237   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:21.122077   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.122912   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124411   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.124796   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:21.126237   12735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:21.130741   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:21.130753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:21.194629   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:21.194647   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:21.222790   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:21.222806   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:23.780448   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:23.790523   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:23.790584   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:23.815703   53550 cri.go:89] found id: ""
	I1213 08:56:23.815717   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.815724   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:23.815729   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:23.815790   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:23.844047   53550 cri.go:89] found id: ""
	I1213 08:56:23.844062   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.844069   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:23.844074   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:23.844132   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:23.868824   53550 cri.go:89] found id: ""
	I1213 08:56:23.868837   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.868844   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:23.868849   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:23.868908   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:23.893054   53550 cri.go:89] found id: ""
	I1213 08:56:23.893067   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.893084   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:23.893089   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:23.893158   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:23.918102   53550 cri.go:89] found id: ""
	I1213 08:56:23.918115   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.918141   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:23.918146   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:23.918221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:23.943674   53550 cri.go:89] found id: ""
	I1213 08:56:23.943706   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.943713   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:23.943719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:23.943780   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:23.969229   53550 cri.go:89] found id: ""
	I1213 08:56:23.969242   53550 logs.go:282] 0 containers: []
	W1213 08:56:23.969250   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:23.969258   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:23.969268   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:24.024433   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:24.024452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:24.036371   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:24.036394   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:24.106333   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:24.097769   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.098607   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.100363   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.101001   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:24.102517   12841 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:24.106343   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:24.106354   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:24.169184   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:24.169204   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
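
Every "describe nodes" attempt above fails identically: dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port inside the node, so kubectl cannot even fetch the API group list. A bare TCP probe reproduces that distinction without kubectl; the endpoint is taken from the errors above, and the snippet is illustrative rather than part of the test suite.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the logged kubectl calls try to reach.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// "connection refused" here means the port is closed, not that
    		// the host is unreachable or the request timed out.
    		fmt.Println("apiserver port not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8441 is accepting connections")
    }
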
	I1213 08:56:26.698614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:26.708577   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:26.708633   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:26.732922   53550 cri.go:89] found id: ""
	I1213 08:56:26.732936   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.732943   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:26.732948   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:26.733006   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:26.755987   53550 cri.go:89] found id: ""
	I1213 08:56:26.756000   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.756007   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:26.756012   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:26.756070   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:26.780069   53550 cri.go:89] found id: ""
	I1213 08:56:26.780082   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.780089   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:26.780094   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:26.780152   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:26.803904   53550 cri.go:89] found id: ""
	I1213 08:56:26.803916   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.803923   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:26.803928   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:26.803983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:26.829092   53550 cri.go:89] found id: ""
	I1213 08:56:26.829106   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.829114   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:26.829119   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:26.829177   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:26.853845   53550 cri.go:89] found id: ""
	I1213 08:56:26.853858   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.853865   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:26.853870   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:26.853925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:26.878415   53550 cri.go:89] found id: ""
	I1213 08:56:26.878428   53550 logs.go:282] 0 containers: []
	W1213 08:56:26.878435   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:26.878443   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:26.878452   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:26.934265   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:26.934282   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:26.945523   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:26.945543   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:27.018637   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:27.009719   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.010496   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012205   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012866   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.014551   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:27.009719   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.010496   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012205   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.012866   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:27.014551   12943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:27.018647   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:27.018658   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:27.084954   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:27.084972   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:29.613085   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:29.622947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:29.623004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:29.646959   53550 cri.go:89] found id: ""
	I1213 08:56:29.646973   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.646980   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:29.646986   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:29.647044   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:29.671745   53550 cri.go:89] found id: ""
	I1213 08:56:29.671759   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.671766   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:29.671771   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:29.671827   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:29.695958   53550 cri.go:89] found id: ""
	I1213 08:56:29.695972   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.695979   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:29.695984   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:29.696042   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:29.720480   53550 cri.go:89] found id: ""
	I1213 08:56:29.720494   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.720501   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:29.720506   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:29.720561   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:29.744988   53550 cri.go:89] found id: ""
	I1213 08:56:29.745001   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.745008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:29.745013   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:29.745069   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:29.768515   53550 cri.go:89] found id: ""
	I1213 08:56:29.768529   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.768536   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:29.768541   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:29.768600   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:29.792772   53550 cri.go:89] found id: ""
	I1213 08:56:29.792791   53550 logs.go:282] 0 containers: []
	W1213 08:56:29.792798   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:29.792806   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:29.792815   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:29.848125   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:29.848143   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:29.859353   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:29.859369   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:29.922416   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:29.914324   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.914996   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.915957   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.916631   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.918102   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:29.914324   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.914996   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.915957   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.916631   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:29.918102   13046 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:29.922426   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:29.922438   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:29.991606   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:29.991633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:32.539218   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:32.551358   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:32.551433   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:32.579755   53550 cri.go:89] found id: ""
	I1213 08:56:32.579769   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.579776   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:32.579782   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:32.579840   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:32.606298   53550 cri.go:89] found id: ""
	I1213 08:56:32.606312   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.606319   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:32.606325   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:32.606386   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:32.631992   53550 cri.go:89] found id: ""
	I1213 08:56:32.632006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.632023   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:32.632028   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:32.632086   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:32.663992   53550 cri.go:89] found id: ""
	I1213 08:56:32.664006   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.664013   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:32.664019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:32.664079   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:32.688738   53550 cri.go:89] found id: ""
	I1213 08:56:32.688752   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.688759   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:32.688764   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:32.688824   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:32.714559   53550 cri.go:89] found id: ""
	I1213 08:56:32.714573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.714590   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:32.714596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:32.714663   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:32.741559   53550 cri.go:89] found id: ""
	I1213 08:56:32.741573   53550 logs.go:282] 0 containers: []
	W1213 08:56:32.741579   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:32.741587   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:32.741597   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:32.800820   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:32.800838   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:32.811825   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:32.811840   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:32.885502   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:32.876782   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.877261   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.879781   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.880103   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.881563   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:32.876782   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.877261   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.879781   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.880103   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:32.881563   13153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:32.885513   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:32.885525   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:32.948272   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:32.948291   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:35.480322   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:35.490281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:35.490342   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:35.514866   53550 cri.go:89] found id: ""
	I1213 08:56:35.514880   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.514891   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:35.514896   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:35.514956   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:35.547423   53550 cri.go:89] found id: ""
	I1213 08:56:35.547436   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.547443   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:35.547449   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:35.547529   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:35.576485   53550 cri.go:89] found id: ""
	I1213 08:56:35.576499   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.576506   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:35.576511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:35.576569   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:35.602583   53550 cri.go:89] found id: ""
	I1213 08:56:35.602597   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.602604   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:35.602610   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:35.602671   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:35.628894   53550 cri.go:89] found id: ""
	I1213 08:56:35.628908   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.628915   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:35.628920   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:35.628983   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:35.657754   53550 cri.go:89] found id: ""
	I1213 08:56:35.657768   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.657775   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:35.657780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:35.657838   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:35.682178   53550 cri.go:89] found id: ""
	I1213 08:56:35.682192   53550 logs.go:282] 0 containers: []
	W1213 08:56:35.682198   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:35.682207   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:35.682218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:35.692814   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:35.692830   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:35.755108   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:35.747202   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.747982   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749544   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749858   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.751344   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:35.747202   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.747982   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749544   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.749858   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:35.751344   13253 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:35.755119   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:35.755130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:35.819728   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:35.819749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:35.848015   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:35.848031   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:38.404654   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:38.414683   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:38.414742   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:38.441126   53550 cri.go:89] found id: ""
	I1213 08:56:38.441140   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.441147   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:38.441152   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:38.441214   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:38.465511   53550 cri.go:89] found id: ""
	I1213 08:56:38.465524   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.465545   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:38.465550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:38.465606   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:38.489339   53550 cri.go:89] found id: ""
	I1213 08:56:38.489353   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.489359   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:38.489364   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:38.489418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:38.513685   53550 cri.go:89] found id: ""
	I1213 08:56:38.513699   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.513706   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:38.513711   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:38.513768   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:38.542115   53550 cri.go:89] found id: ""
	I1213 08:56:38.542128   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.542135   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:38.542140   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:38.542204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:38.569759   53550 cri.go:89] found id: ""
	I1213 08:56:38.569772   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.569778   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:38.569784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:38.569842   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:38.596740   53550 cri.go:89] found id: ""
	I1213 08:56:38.596754   53550 logs.go:282] 0 containers: []
	W1213 08:56:38.596761   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:38.596769   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:38.596780   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:38.654316   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:38.654335   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:38.665035   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:38.665050   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:38.729308   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:56:38.720643   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.721501   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723232   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.723577   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:38.725294   13362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:56:38.729317   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:38.729330   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:38.790889   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:38.790908   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:41.323859   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:41.335168   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:41.335228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:41.361007   53550 cri.go:89] found id: ""
	I1213 08:56:41.361021   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.361028   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:41.361033   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:41.361090   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:41.385773   53550 cri.go:89] found id: ""
	I1213 08:56:41.385787   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.385794   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:41.385799   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:41.385857   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:41.415146   53550 cri.go:89] found id: ""
	I1213 08:56:41.415160   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.415174   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:41.415179   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:41.415235   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:41.441108   53550 cri.go:89] found id: ""
	I1213 08:56:41.441122   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.441129   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:41.441134   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:41.441190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:41.475987   53550 cri.go:89] found id: ""
	I1213 08:56:41.476001   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.476008   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:41.476014   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:41.476073   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:41.499775   53550 cri.go:89] found id: ""
	I1213 08:56:41.499789   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.499796   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:41.499801   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:41.499861   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:41.528901   53550 cri.go:89] found id: ""
	I1213 08:56:41.528914   53550 logs.go:282] 0 containers: []
	W1213 08:56:41.528931   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:41.528939   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:41.528956   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:41.589661   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:41.589678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:41.602123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:41.602138   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:41.667706   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:41.659176   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.659929   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.661608   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.662073   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:41.663743   13467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
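The describe-nodes step fails for the same underlying reason: kubectl cannot reach the API server on the profile's chosen port 8441, and every discovery retry is refused on [::1]:8441 because nothing is listening there. A quick manual confirmation from inside the node (sketch; assumes curl and ss are installed, which is not guaranteed on the node image):

    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"
    sudo ss -ltnp | grep -w 8441 || echo "no listener on port 8441"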
	I1213 08:56:41.667715   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:41.667735   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:41.730253   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:41.730270   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
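The "container status" collector is deliberately tolerant: it resolves crictl via which (falling back to the bare name if it is not on PATH) and, if the whole crictl invocation fails, retries with docker ps -a for Docker-runtime clusters. Unpacked, the one-liner above is equivalent to this behavior-preserving sketch:

    CRICTL=$(which crictl || echo crictl)   # bare name if not on PATH
    sudo "$CRICTL" ps -a || sudo docker ps -a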
	I1213 08:56:44.257671   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
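Each retry of the wait loop opens with this process-level liveness probe: pgrep -f matches against full command lines, -x requires the pattern kube-apiserver.*minikube.* to match the whole command line, and -n keeps only the newest match. Its exit status is what drives the loop (sketch; the roughly 3-second cadence is what the timestamps in this log show):

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process is up"
    else
        echo "apiserver still absent; retrying in ~3s"
    fi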
	I1213 08:56:44.269222   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:44.269293   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:44.294398   53550 cri.go:89] found id: ""
	I1213 08:56:44.294412   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.294419   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:44.294423   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:44.294484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:44.319070   53550 cri.go:89] found id: ""
	I1213 08:56:44.319084   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.319092   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:44.319097   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:44.319155   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:44.343392   53550 cri.go:89] found id: ""
	I1213 08:56:44.343405   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.343420   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:44.343425   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:44.343485   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:44.367894   53550 cri.go:89] found id: ""
	I1213 08:56:44.367909   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.367924   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:44.367929   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:44.367993   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:44.393473   53550 cri.go:89] found id: ""
	I1213 08:56:44.393487   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.393505   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:44.393511   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:44.393579   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:44.419150   53550 cri.go:89] found id: ""
	I1213 08:56:44.419164   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.419171   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:44.419177   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:44.419236   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:44.445826   53550 cri.go:89] found id: ""
	I1213 08:56:44.445839   53550 logs.go:282] 0 containers: []
	W1213 08:56:44.445846   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:44.445854   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:44.445864   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:44.473670   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:44.473686   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:44.532419   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:44.532439   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:44.545059   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:44.545075   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:44.621942   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:44.613047   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.613768   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.615594   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.616292   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:44.618008   13583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:44.621960   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:44.621970   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.187660   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:47.197939   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:47.197999   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:47.223309   53550 cri.go:89] found id: ""
	I1213 08:56:47.223328   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.223335   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:47.223341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:47.223404   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:47.248945   53550 cri.go:89] found id: ""
	I1213 08:56:47.248958   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.248965   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:47.248971   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:47.249030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:47.277058   53550 cri.go:89] found id: ""
	I1213 08:56:47.277072   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.277079   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:47.277084   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:47.277141   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:47.301116   53550 cri.go:89] found id: ""
	I1213 08:56:47.301130   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.301137   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:47.301151   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:47.301209   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:47.323965   53550 cri.go:89] found id: ""
	I1213 08:56:47.323979   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.323987   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:47.323992   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:47.324050   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:47.348999   53550 cri.go:89] found id: ""
	I1213 08:56:47.349019   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.349027   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:47.349032   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:47.349092   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:47.373783   53550 cri.go:89] found id: ""
	I1213 08:56:47.373797   53550 logs.go:282] 0 containers: []
	W1213 08:56:47.373803   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:47.373811   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:47.373820   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:47.429021   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:47.429039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:47.439785   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:47.439801   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:47.500829   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:47.492613   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.493257   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.494833   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.495318   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:47.496858   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:47.500840   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:47.500850   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:47.568111   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:47.568130   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.110119   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:50.120537   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:50.120602   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:50.148966   53550 cri.go:89] found id: ""
	I1213 08:56:50.148980   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.148986   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:50.148991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:50.149046   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:50.177907   53550 cri.go:89] found id: ""
	I1213 08:56:50.177921   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.177928   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:50.177933   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:50.177996   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:50.203131   53550 cri.go:89] found id: ""
	I1213 08:56:50.203144   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.203151   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:50.203155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:50.203262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:50.226237   53550 cri.go:89] found id: ""
	I1213 08:56:50.226257   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.226264   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:50.226269   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:50.226327   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:50.253758   53550 cri.go:89] found id: ""
	I1213 08:56:50.253773   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.253779   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:50.253784   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:50.253843   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:50.278302   53550 cri.go:89] found id: ""
	I1213 08:56:50.278315   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.278322   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:50.278327   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:50.278392   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:50.309556   53550 cri.go:89] found id: ""
	I1213 08:56:50.309569   53550 logs.go:282] 0 containers: []
	W1213 08:56:50.309576   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:50.309584   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:50.309594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:50.320066   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:50.320081   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:50.382949   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:50.374268   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.375072   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.376878   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.377528   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:50.379007   13777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:50.382958   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:50.382969   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:50.444351   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:50.444370   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:50.470781   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:50.470797   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.028628   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:53.039130   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:53.039200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:53.063996   53550 cri.go:89] found id: ""
	I1213 08:56:53.064009   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.064015   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:53.064020   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:53.064076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:53.088275   53550 cri.go:89] found id: ""
	I1213 08:56:53.088289   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.088296   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:53.088300   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:53.088358   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:53.111773   53550 cri.go:89] found id: ""
	I1213 08:56:53.111786   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.111793   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:53.111808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:53.111887   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:53.137026   53550 cri.go:89] found id: ""
	I1213 08:56:53.137040   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.137046   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:53.137051   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:53.137107   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:53.160335   53550 cri.go:89] found id: ""
	I1213 08:56:53.160349   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.160356   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:53.160361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:53.160416   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:53.184713   53550 cri.go:89] found id: ""
	I1213 08:56:53.184726   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.184733   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:53.184738   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:53.184795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:53.208847   53550 cri.go:89] found id: ""
	I1213 08:56:53.208861   53550 logs.go:282] 0 containers: []
	W1213 08:56:53.208868   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:53.208875   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:53.208886   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:53.266985   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:53.267004   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:53.277388   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:53.277404   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:53.340191   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:53.332012   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.332685   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334293   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.334789   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:53.336280   13881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:53.340200   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:53.340211   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:53.401706   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:53.401724   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:55.928555   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:55.939550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:55.939616   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:55.965405   53550 cri.go:89] found id: ""
	I1213 08:56:55.965419   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.965426   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:55.965431   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:55.965498   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:55.992150   53550 cri.go:89] found id: ""
	I1213 08:56:55.992164   53550 logs.go:282] 0 containers: []
	W1213 08:56:55.992171   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:55.992175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:55.992230   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:56.016602   53550 cri.go:89] found id: ""
	I1213 08:56:56.016616   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.016623   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:56.016628   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:56.016689   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:56.042580   53550 cri.go:89] found id: ""
	I1213 08:56:56.042593   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.042600   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:56.042605   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:56.042662   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:56.068761   53550 cri.go:89] found id: ""
	I1213 08:56:56.068775   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.068782   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:56.068787   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:56.068848   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:56.093033   53550 cri.go:89] found id: ""
	I1213 08:56:56.093048   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.093055   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:56.093061   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:56.093126   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:56.117228   53550 cri.go:89] found id: ""
	I1213 08:56:56.117241   53550 logs.go:282] 0 containers: []
	W1213 08:56:56.117248   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:56.117255   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:56.117266   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:56.176992   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:56.177011   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:56.188270   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:56.188285   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:56.253019   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:56.245144   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.245941   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247417   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.247852   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:56.249325   13987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:56.253029   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:56.253039   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:56.317674   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:56.317696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:56:58.848619   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:56:58.859053   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:56:58.859112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:56:58.885409   53550 cri.go:89] found id: ""
	I1213 08:56:58.885423   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.885430   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:56:58.885436   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:56:58.885494   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:56:58.910222   53550 cri.go:89] found id: ""
	I1213 08:56:58.910236   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.910243   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:56:58.910249   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:56:58.910325   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:56:58.934888   53550 cri.go:89] found id: ""
	I1213 08:56:58.934902   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.934909   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:56:58.934914   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:56:58.934973   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:56:58.959400   53550 cri.go:89] found id: ""
	I1213 08:56:58.959413   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.959420   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:56:58.959426   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:56:58.959487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:56:58.983607   53550 cri.go:89] found id: ""
	I1213 08:56:58.983621   53550 logs.go:282] 0 containers: []
	W1213 08:56:58.983627   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:56:58.983651   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:56:58.983710   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:56:59.013864   53550 cri.go:89] found id: ""
	I1213 08:56:59.013879   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.013886   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:56:59.013892   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:56:59.013953   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:56:59.039411   53550 cri.go:89] found id: ""
	I1213 08:56:59.039425   53550 logs.go:282] 0 containers: []
	W1213 08:56:59.039432   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:56:59.039475   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:56:59.039485   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:56:59.096733   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:56:59.096753   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:56:59.107622   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:56:59.107636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:56:59.174925   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:56:59.166857   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.167747   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169305   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.169619   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:56:59.171088   14092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:56:59.174934   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:56:59.174947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:56:59.241043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:56:59.241063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:01.772758   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:01.783635   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:01.783701   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:01.809991   53550 cri.go:89] found id: ""
	I1213 08:57:01.810006   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.810012   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:01.810017   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:01.810077   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:01.839186   53550 cri.go:89] found id: ""
	I1213 08:57:01.839200   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.839207   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:01.839212   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:01.839280   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:01.863706   53550 cri.go:89] found id: ""
	I1213 08:57:01.863720   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.863727   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:01.863733   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:01.863802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:01.888840   53550 cri.go:89] found id: ""
	I1213 08:57:01.888853   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.888866   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:01.888871   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:01.888931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:01.915920   53550 cri.go:89] found id: ""
	I1213 08:57:01.915933   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.915940   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:01.915944   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:01.916002   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:01.945751   53550 cri.go:89] found id: ""
	I1213 08:57:01.945765   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.945771   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:01.945776   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:01.945845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:01.970743   53550 cri.go:89] found id: ""
	I1213 08:57:01.970757   53550 logs.go:282] 0 containers: []
	W1213 08:57:01.970765   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:01.970773   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:01.970782   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:02.026866   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:02.026889   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:02.038522   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:02.038539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:02.102348   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:02.094261   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.095133   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.096614   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.097026   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:02.098541   14197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1213 08:57:02.102361   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:02.102375   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:02.169043   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:02.169063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:04.696543   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:04.706341   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:04.706437   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:04.731229   53550 cri.go:89] found id: ""
	I1213 08:57:04.731243   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.731250   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:04.731255   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:04.731313   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:04.755649   53550 cri.go:89] found id: ""
	I1213 08:57:04.755664   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.755671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:04.755675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:04.755731   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:04.792911   53550 cri.go:89] found id: ""
	I1213 08:57:04.792925   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.792932   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:04.792937   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:04.793004   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:04.819883   53550 cri.go:89] found id: ""
	I1213 08:57:04.819898   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.819905   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:04.819910   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:04.819977   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:04.849837   53550 cri.go:89] found id: ""
	I1213 08:57:04.849851   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.849858   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:04.849863   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:04.849918   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:04.874858   53550 cri.go:89] found id: ""
	I1213 08:57:04.874882   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.874890   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:04.874895   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:04.874960   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:04.903606   53550 cri.go:89] found id: ""
	I1213 08:57:04.903627   53550 logs.go:282] 0 containers: []
	W1213 08:57:04.903634   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:04.903643   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:04.903654   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:04.974645   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:04.965728   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.966550   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968066   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.968742   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:04.970242   14293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:04.974655   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:04.974665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:05.042463   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:05.042483   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:05.073448   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:05.073463   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:05.138728   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:05.138751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
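The pass above is one full probe-and-gather cycle: minikube asks the containerd CRI for each expected control-plane container by name, finds none, and then collects the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of the per-component probe, assuming crictl is installed on the node being inspected:

    # Probe each expected control-plane container by name, mirroring the cycle above.
    # Empty output for every name means no control-plane containers were ever created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done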
	I1213 08:57:07.650339   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:07.660396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:07.660456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:07.683873   53550 cri.go:89] found id: ""
	I1213 08:57:07.683886   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.683893   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:07.683898   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:07.683955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:07.708331   53550 cri.go:89] found id: ""
	I1213 08:57:07.708345   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.708352   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:07.708357   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:07.708413   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:07.732899   53550 cri.go:89] found id: ""
	I1213 08:57:07.732913   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.732920   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:07.732925   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:07.732984   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:07.757287   53550 cri.go:89] found id: ""
	I1213 08:57:07.757301   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.757308   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:07.757313   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:07.757384   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:07.795374   53550 cri.go:89] found id: ""
	I1213 08:57:07.795387   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.795394   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:07.795399   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:07.795464   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:07.825153   53550 cri.go:89] found id: ""
	I1213 08:57:07.825167   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.825173   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:07.825182   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:07.825237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:07.852307   53550 cri.go:89] found id: ""
	I1213 08:57:07.852321   53550 logs.go:282] 0 containers: []
	W1213 08:57:07.852327   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:07.852336   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:07.852345   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:07.880059   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:07.880077   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:07.939241   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:07.939258   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:07.949880   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:07.949895   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:08.020565   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:08.011681   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.012563   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014158   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.014873   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:08.016465   14417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:08.020576   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:08.020587   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.587648   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:10.597489   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:10.597549   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:10.628550   53550 cri.go:89] found id: ""
	I1213 08:57:10.628564   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.628571   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:10.628579   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:10.628636   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:10.652715   53550 cri.go:89] found id: ""
	I1213 08:57:10.652728   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.652735   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:10.652740   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:10.652800   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:10.676571   53550 cri.go:89] found id: ""
	I1213 08:57:10.676585   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.676591   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:10.676596   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:10.676656   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:10.701425   53550 cri.go:89] found id: ""
	I1213 08:57:10.701439   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.701446   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:10.701451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:10.701512   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:10.725031   53550 cri.go:89] found id: ""
	I1213 08:57:10.725044   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.725051   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:10.725056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:10.725115   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:10.748783   53550 cri.go:89] found id: ""
	I1213 08:57:10.748796   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.748803   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:10.748808   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:10.748865   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:10.782351   53550 cri.go:89] found id: ""
	I1213 08:57:10.782364   53550 logs.go:282] 0 containers: []
	W1213 08:57:10.782371   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:10.782379   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:10.782389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:10.795735   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:10.795751   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:10.871365   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:10.863538   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.863981   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865416   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.865845   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:10.867361   14510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:10.871375   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:10.871386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:10.934169   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:10.934186   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:10.960579   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:10.960595   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
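Every describe-nodes attempt fails identically: the node-local kubectl cannot reach https://localhost:8441, and pgrep finds no kube-apiserver process, so nothing is listening on the apiserver port. A sketch of confirming that by hand from a shell on the node; the curl check is an added illustration, not part of minikube's own probe:

    # Confirm that no apiserver process exists and that port 8441 is closed.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "port 8441 refused"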
	I1213 08:57:13.522265   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:13.532592   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:13.532651   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:13.557594   53550 cri.go:89] found id: ""
	I1213 08:57:13.557607   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.557614   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:13.557622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:13.557678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:13.582015   53550 cri.go:89] found id: ""
	I1213 08:57:13.582029   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.582036   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:13.582041   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:13.582101   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:13.606414   53550 cri.go:89] found id: ""
	I1213 08:57:13.606430   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.606437   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:13.606442   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:13.606501   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:13.633258   53550 cri.go:89] found id: ""
	I1213 08:57:13.633271   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.633278   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:13.633283   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:13.633347   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:13.657138   53550 cri.go:89] found id: ""
	I1213 08:57:13.657151   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.657158   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:13.657163   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:13.657220   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:13.680740   53550 cri.go:89] found id: ""
	I1213 08:57:13.680754   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.680760   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:13.680766   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:13.680821   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:13.704953   53550 cri.go:89] found id: ""
	I1213 08:57:13.704966   53550 logs.go:282] 0 containers: []
	W1213 08:57:13.704973   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:13.704981   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:13.704992   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:13.770673   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:13.760145   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.760885   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762410   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.762907   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:13.764469   14607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:13.770683   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:13.770696   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:13.840896   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:13.840915   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:13.870203   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:13.870219   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:13.927703   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:13.927721   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.440308   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:16.450569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:16.450632   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:16.477483   53550 cri.go:89] found id: ""
	I1213 08:57:16.477497   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.477503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:16.477508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:16.477565   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:16.502333   53550 cri.go:89] found id: ""
	I1213 08:57:16.502347   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.502354   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:16.502369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:16.502428   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:16.532266   53550 cri.go:89] found id: ""
	I1213 08:57:16.532282   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.532288   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:16.532293   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:16.532350   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:16.560396   53550 cri.go:89] found id: ""
	I1213 08:57:16.560410   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.560417   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:16.560422   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:16.560478   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:16.588855   53550 cri.go:89] found id: ""
	I1213 08:57:16.588868   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.588875   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:16.588881   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:16.588940   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:16.613011   53550 cri.go:89] found id: ""
	I1213 08:57:16.613024   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.613031   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:16.613036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:16.613093   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:16.637627   53550 cri.go:89] found id: ""
	I1213 08:57:16.637641   53550 logs.go:282] 0 containers: []
	W1213 08:57:16.637648   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:16.637655   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:16.637665   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:16.694489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:16.694506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:16.705456   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:16.705471   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:16.774554   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:16.762987   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.763565   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765176   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.765796   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:16.767462   14718 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:16.774565   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:16.774577   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:16.840799   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:16.840818   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
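The describe-nodes probe uses the version-pinned kubectl binary that minikube installs on the node, pointed at the node-local kubeconfig. It can be rerun manually with the exact paths from the log, and it will keep printing the connection-refused errors shown above until an apiserver is listening on 8441:

    # Same command the log runs; exits non-zero while localhost:8441 refuses connections.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig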
	I1213 08:57:19.370819   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:19.380996   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:19.381057   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:19.405680   53550 cri.go:89] found id: ""
	I1213 08:57:19.405694   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.405701   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:19.405707   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:19.405765   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:19.434562   53550 cri.go:89] found id: ""
	I1213 08:57:19.434575   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.434583   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:19.434588   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:19.434645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:19.460752   53550 cri.go:89] found id: ""
	I1213 08:57:19.460765   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.460772   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:19.460777   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:19.460833   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:19.486494   53550 cri.go:89] found id: ""
	I1213 08:57:19.486508   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.486515   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:19.486520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:19.486580   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:19.515809   53550 cri.go:89] found id: ""
	I1213 08:57:19.515824   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.515830   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:19.515835   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:19.515892   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:19.541206   53550 cri.go:89] found id: ""
	I1213 08:57:19.541219   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.541226   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:19.541231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:19.541298   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:19.565992   53550 cri.go:89] found id: ""
	I1213 08:57:19.566005   53550 logs.go:282] 0 containers: []
	W1213 08:57:19.566012   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:19.566020   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:19.566030   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:19.593821   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:19.593836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:19.650142   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:19.650161   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:19.660963   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:19.660978   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:19.726595   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:19.718869   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.719407   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.720893   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.721354   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:19.722818   14833 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:19.726604   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:19.726615   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:22.290630   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:22.300536   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:22.300595   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:22.323650   53550 cri.go:89] found id: ""
	I1213 08:57:22.323663   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.323670   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:22.323675   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:22.323738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:22.346879   53550 cri.go:89] found id: ""
	I1213 08:57:22.346892   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.346899   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:22.346904   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:22.346958   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:22.370613   53550 cri.go:89] found id: ""
	I1213 08:57:22.370627   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.370633   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:22.370638   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:22.370695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:22.397037   53550 cri.go:89] found id: ""
	I1213 08:57:22.397051   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.397057   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:22.397062   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:22.397120   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:22.420786   53550 cri.go:89] found id: ""
	I1213 08:57:22.420799   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.420806   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:22.420811   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:22.420873   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:22.445029   53550 cri.go:89] found id: ""
	I1213 08:57:22.445043   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.445050   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:22.445056   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:22.445112   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:22.468674   53550 cri.go:89] found id: ""
	I1213 08:57:22.468688   53550 logs.go:282] 0 containers: []
	W1213 08:57:22.468694   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:22.468702   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:22.468712   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:22.495304   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:22.495322   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:22.552462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:22.552479   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:22.562826   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:22.562841   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:22.622604   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:22.615033   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.615580   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.616785   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.617360   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:22.618835   14940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:22.622614   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:22.622625   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
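Each pass also tails the containerd and kubelet journals and filters dmesg for warnings and errors; with no containers created at all, those journals are where the underlying failure has to surface. The same gathering commands, sized down for a quick interactive look:

    # Journal and kernel-log tails minikube gathers on every pass, shortened to 50 lines.
    sudo journalctl -u containerd -n 50 --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 50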
	I1213 08:57:25.187376   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:25.197281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:25.197340   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:25.224829   53550 cri.go:89] found id: ""
	I1213 08:57:25.224843   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.224850   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:25.224855   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:25.224914   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:25.253288   53550 cri.go:89] found id: ""
	I1213 08:57:25.253303   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.253310   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:25.253315   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:25.253371   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:25.277253   53550 cri.go:89] found id: ""
	I1213 08:57:25.277267   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.277274   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:25.277279   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:25.277338   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:25.303815   53550 cri.go:89] found id: ""
	I1213 08:57:25.303828   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.303835   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:25.303840   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:25.303901   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:25.328041   53550 cri.go:89] found id: ""
	I1213 08:57:25.328054   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.328060   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:25.328065   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:25.328123   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:25.356334   53550 cri.go:89] found id: ""
	I1213 08:57:25.356348   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.356355   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:25.356369   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:25.356424   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:25.380096   53550 cri.go:89] found id: ""
	I1213 08:57:25.380110   53550 logs.go:282] 0 containers: []
	W1213 08:57:25.380116   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:25.380124   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:25.380134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:25.439426   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:25.439444   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:25.449905   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:25.449921   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:25.512900   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:25.504371   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.504993   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.506701   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.507271   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:25.508895   15035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:25.512910   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:25.512920   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:25.575756   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:25.575775   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
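When pgrep never finds a kube-apiserver, a common next step (not performed in this log) is to check whether the static pod manifests were written and whether the kubelet is reading them. A hedged sketch, assuming the standard kubeadm manifest directory /etc/kubernetes/manifests, which this log does not show:

    # Static pod manifests should exist even before any container starts.
    ls -l /etc/kubernetes/manifests/ 2>/dev/null || echo "no manifest directory"
    # Kubelet journal entries about static pods often explain why they never ran.
    sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'static|manifest' | tail -n 20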
	I1213 08:57:28.103479   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:28.113820   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:28.113880   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:28.139011   53550 cri.go:89] found id: ""
	I1213 08:57:28.139026   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.139033   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:28.139038   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:28.139097   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:28.169622   53550 cri.go:89] found id: ""
	I1213 08:57:28.169635   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.169642   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:28.169647   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:28.169707   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:28.197421   53550 cri.go:89] found id: ""
	I1213 08:57:28.197436   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.197443   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:28.197448   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:28.197504   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:28.221931   53550 cri.go:89] found id: ""
	I1213 08:57:28.221945   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.221952   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:28.221957   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:28.222019   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:28.245719   53550 cri.go:89] found id: ""
	I1213 08:57:28.245732   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.245739   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:28.245744   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:28.245801   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:28.273087   53550 cri.go:89] found id: ""
	I1213 08:57:28.273101   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.273108   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:28.273113   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:28.273170   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:28.299359   53550 cri.go:89] found id: ""
	I1213 08:57:28.299372   53550 logs.go:282] 0 containers: []
	W1213 08:57:28.299379   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:28.299388   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:28.299398   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:28.355178   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:28.355195   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:28.365905   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:28.365921   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:28.430892   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:28.422372   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.423034   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.424584   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.425047   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.426553   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:28.422372   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.423034   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.424584   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.425047   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:28.426553   15141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:28.430909   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:28.430919   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:28.493985   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:28.494008   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
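Each cycle in this excerpt starts from the same probe, "sudo pgrep -xnf kube-apiserver.*minikube.*", retried on a roughly three-second rhythm (08:57:28, 08:57:31, 08:57:34, ...); only when no process matches does the per-component crictl sweep and log gathering above run. A minimal sketch of that wait loop, assuming a hypothetical two-minute deadline (the real timeout is not visible in this excerpt):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiServerRunning mirrors the logged probe: pgrep exits 0 when a
    // matching kube-apiserver process exists and nonzero otherwise.
    func apiServerRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed value
        for time.Now().Before(deadline) {
            if apiServerRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            // On each miss the real code also lists containers and
            // gathers kubelet/dmesg/containerd logs, as seen above.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }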
	I1213 08:57:31.028636   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:31.039540   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:31.039600   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:31.067565   53550 cri.go:89] found id: ""
	I1213 08:57:31.067579   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.067586   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:31.067591   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:31.067649   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:31.103967   53550 cri.go:89] found id: ""
	I1213 08:57:31.103994   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.104001   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:31.104006   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:31.104072   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:31.128428   53550 cri.go:89] found id: ""
	I1213 08:57:31.128455   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.128462   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:31.128467   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:31.128535   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:31.157837   53550 cri.go:89] found id: ""
	I1213 08:57:31.157851   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.157857   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:31.157864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:31.157920   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:31.182139   53550 cri.go:89] found id: ""
	I1213 08:57:31.182153   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.182160   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:31.182165   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:31.182221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:31.206203   53550 cri.go:89] found id: ""
	I1213 08:57:31.206217   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.206224   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:31.206229   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:31.206284   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:31.230290   53550 cri.go:89] found id: ""
	I1213 08:57:31.230304   53550 logs.go:282] 0 containers: []
	W1213 08:57:31.230311   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:31.230319   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:31.230335   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:31.240760   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:31.240775   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:31.306114   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:31.297892   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.298559   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300121   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300425   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.301865   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:31.297892   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.298559   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300121   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.300425   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:31.301865   15242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
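Every "describe nodes" attempt dies identically: kubectl takes the server address from /var/lib/minikube/kubeconfig, tries https://localhost:8441, and the TCP dial to [::1]:8441 is refused because nothing is listening, which is consistent with the empty kube-apiserver container list above. A trivial dial probe (hypothetical, not part of the test) reproduces the failing check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same endpoint kubectl is dialing in the stderr above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }

"connection refused" (rather than a timeout) means the host is reachable and the port is simply closed, so the failure is the missing apiserver process, not networking.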
	I1213 08:57:31.306123   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:31.306134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:31.372771   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:31.372790   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:31.402327   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:31.402342   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
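The "container status" command above, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, encodes a two-level fallback: resolve crictl if it is on PATH, and if the crictl listing fails for any reason, fall back to docker. The same logic as a small Go sketch (illustrative only; local execution assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl, as the logged shell command does.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // Mirrors the "|| sudo docker ps -a" tail: runs only when
            // the crictl invocation is absent or fails.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no container runtime CLI available:", err)
            return
        }
        fmt.Print(string(out))
    }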
	I1213 08:57:33.959197   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:33.969353   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:33.969420   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:33.994169   53550 cri.go:89] found id: ""
	I1213 08:57:33.994183   53550 logs.go:282] 0 containers: []
	W1213 08:57:33.994190   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:33.994195   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:33.994253   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:34.022338   53550 cri.go:89] found id: ""
	I1213 08:57:34.022367   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.022375   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:34.022380   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:34.022457   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:34.055485   53550 cri.go:89] found id: ""
	I1213 08:57:34.055547   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.055563   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:34.055569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:34.055645   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:34.085397   53550 cri.go:89] found id: ""
	I1213 08:57:34.085411   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.085419   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:34.085424   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:34.085487   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:34.112540   53550 cri.go:89] found id: ""
	I1213 08:57:34.112553   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.112561   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:34.112566   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:34.112622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:34.137911   53550 cri.go:89] found id: ""
	I1213 08:57:34.137934   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.137942   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:34.137947   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:34.138013   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:34.165184   53550 cri.go:89] found id: ""
	I1213 08:57:34.165197   53550 logs.go:282] 0 containers: []
	W1213 08:57:34.165204   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:34.165213   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:34.165224   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:34.221937   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:34.221954   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:34.232900   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:34.232925   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:34.299398   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:34.291492   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.291917   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.293598   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.294066   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.295548   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:34.291492   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.291917   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.293598   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.294066   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:34.295548   15346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:34.299409   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:34.299422   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:34.362086   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:34.362104   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:36.894643   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:36.904509   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:36.904571   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:36.928971   53550 cri.go:89] found id: ""
	I1213 08:57:36.928986   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.928993   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:36.928998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:36.929055   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:36.963924   53550 cri.go:89] found id: ""
	I1213 08:57:36.963938   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.963945   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:36.963956   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:36.964015   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:36.989352   53550 cri.go:89] found id: ""
	I1213 08:57:36.989366   53550 logs.go:282] 0 containers: []
	W1213 08:57:36.989373   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:36.989378   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:36.989435   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:37.022947   53550 cri.go:89] found id: ""
	I1213 08:57:37.022973   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.022982   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:37.022987   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:37.023065   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:37.058627   53550 cri.go:89] found id: ""
	I1213 08:57:37.058642   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.058649   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:37.058654   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:37.058711   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:37.093026   53550 cri.go:89] found id: ""
	I1213 08:57:37.093047   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.093054   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:37.093059   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:37.093127   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:37.119099   53550 cri.go:89] found id: ""
	I1213 08:57:37.119113   53550 logs.go:282] 0 containers: []
	W1213 08:57:37.119120   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:37.119127   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:37.119142   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:37.129746   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:37.129770   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:37.192251   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:37.184204   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.185001   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186648   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186953   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.188434   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:37.184204   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.185001   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186648   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.186953   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:37.188434   15450 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:37.192263   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:37.192274   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:37.258678   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:37.258697   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:37.286406   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:37.286421   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:39.843274   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:39.853155   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:39.853221   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:39.880613   53550 cri.go:89] found id: ""
	I1213 08:57:39.880627   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.880634   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:39.880639   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:39.880695   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:39.908166   53550 cri.go:89] found id: ""
	I1213 08:57:39.908179   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.908191   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:39.908197   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:39.908255   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:39.931780   53550 cri.go:89] found id: ""
	I1213 08:57:39.931803   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.931811   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:39.931816   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:39.931885   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:39.959597   53550 cri.go:89] found id: ""
	I1213 08:57:39.959610   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.959617   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:39.959622   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:39.959678   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:39.987876   53550 cri.go:89] found id: ""
	I1213 08:57:39.987889   53550 logs.go:282] 0 containers: []
	W1213 08:57:39.987896   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:39.987901   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:39.987955   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:40.032588   53550 cri.go:89] found id: ""
	I1213 08:57:40.032603   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.032610   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:40.032615   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:40.032675   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:40.061908   53550 cri.go:89] found id: ""
	I1213 08:57:40.061922   53550 logs.go:282] 0 containers: []
	W1213 08:57:40.061929   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:40.061937   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:40.061947   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:40.126971   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:40.126990   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:40.143091   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:40.143107   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:40.207107   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:40.198855   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.199548   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201367   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201930   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.203470   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:40.198855   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.199548   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201367   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.201930   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:40.203470   15552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:40.207117   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:40.207127   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:40.276818   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:40.276842   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:42.806068   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:42.816147   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:42.816212   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:42.844268   53550 cri.go:89] found id: ""
	I1213 08:57:42.844281   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.844288   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:42.844294   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:42.844353   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:42.869114   53550 cri.go:89] found id: ""
	I1213 08:57:42.869127   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.869134   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:42.869139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:42.869195   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:42.892972   53550 cri.go:89] found id: ""
	I1213 08:57:42.892986   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.892993   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:42.892998   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:42.893072   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:42.916620   53550 cri.go:89] found id: ""
	I1213 08:57:42.916633   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.916640   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:42.916646   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:42.916702   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:42.940313   53550 cri.go:89] found id: ""
	I1213 08:57:42.940327   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.940334   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:42.940339   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:42.940394   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:42.965365   53550 cri.go:89] found id: ""
	I1213 08:57:42.965379   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.965386   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:42.965391   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:42.965451   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:42.990702   53550 cri.go:89] found id: ""
	I1213 08:57:42.990715   53550 logs.go:282] 0 containers: []
	W1213 08:57:42.990722   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:42.990729   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:42.990742   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:43.048989   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:43.049008   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:43.061818   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:43.061836   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:43.129375   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:43.121071   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.121717   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.123392   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.124082   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.125650   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:43.121071   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.121717   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.123392   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.124082   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:43.125650   15659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:43.129386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:43.129396   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:43.191354   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:43.191373   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.723775   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:45.733853   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:45.733913   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:45.765626   53550 cri.go:89] found id: ""
	I1213 08:57:45.765639   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.765646   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:45.765652   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:45.765713   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:45.793721   53550 cri.go:89] found id: ""
	I1213 08:57:45.793734   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.793741   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:45.793746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:45.793802   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:45.822307   53550 cri.go:89] found id: ""
	I1213 08:57:45.822320   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.822341   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:45.822347   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:45.822411   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:45.851368   53550 cri.go:89] found id: ""
	I1213 08:57:45.851382   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.851390   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:45.851395   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:45.851454   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:45.877295   53550 cri.go:89] found id: ""
	I1213 08:57:45.877308   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.877321   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:45.877326   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:45.877382   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:45.905661   53550 cri.go:89] found id: ""
	I1213 08:57:45.905674   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.905681   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:45.905686   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:45.905745   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:45.934028   53550 cri.go:89] found id: ""
	I1213 08:57:45.934042   53550 logs.go:282] 0 containers: []
	W1213 08:57:45.934050   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:45.934058   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:45.934068   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:45.962148   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:45.962164   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:46.017986   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:46.018005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:46.031923   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:46.031939   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:46.106367   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:46.098570   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.099183   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.100789   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.101326   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.102477   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:46.098570   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.099183   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.100789   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.101326   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:46.102477   15773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:46.106379   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:46.106389   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:48.670805   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:48.680874   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:48.680935   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:48.704942   53550 cri.go:89] found id: ""
	I1213 08:57:48.704955   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.704962   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:48.704968   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:48.705029   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:48.729965   53550 cri.go:89] found id: ""
	I1213 08:57:48.729979   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.729986   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:48.729991   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:48.730048   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:48.754712   53550 cri.go:89] found id: ""
	I1213 08:57:48.754726   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.754733   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:48.754739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:48.754798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:48.786991   53550 cri.go:89] found id: ""
	I1213 08:57:48.787014   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.787021   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:48.787026   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:48.787082   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:48.812918   53550 cri.go:89] found id: ""
	I1213 08:57:48.812932   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.812939   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:48.812943   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:48.813010   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:48.841512   53550 cri.go:89] found id: ""
	I1213 08:57:48.841525   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.841533   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:48.841538   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:48.841597   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:48.866500   53550 cri.go:89] found id: ""
	I1213 08:57:48.866514   53550 logs.go:282] 0 containers: []
	W1213 08:57:48.866521   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:48.866529   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:48.866539   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:48.922975   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:48.922993   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:48.933525   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:48.933540   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:48.995831   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:48.987715   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.988587   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990160   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.990461   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:48.991987   15862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:48.995841   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:48.995852   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:49.061866   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:49.061885   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.594845   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:51.606962   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:51.607021   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:51.630371   53550 cri.go:89] found id: ""
	I1213 08:57:51.630390   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.630397   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:51.630402   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:51.630456   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:51.655753   53550 cri.go:89] found id: ""
	I1213 08:57:51.655768   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.655775   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:51.655780   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:51.655835   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:51.680116   53550 cri.go:89] found id: ""
	I1213 08:57:51.680130   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.680136   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:51.680142   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:51.680199   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:51.703715   53550 cri.go:89] found id: ""
	I1213 08:57:51.703728   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.703734   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:51.703739   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:51.703798   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:51.728242   53550 cri.go:89] found id: ""
	I1213 08:57:51.728257   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.728263   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:51.728268   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:51.728334   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:51.752764   53550 cri.go:89] found id: ""
	I1213 08:57:51.752777   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.752783   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:51.752788   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:51.752845   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:51.776542   53550 cri.go:89] found id: ""
	I1213 08:57:51.776556   53550 logs.go:282] 0 containers: []
	W1213 08:57:51.776562   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:51.776570   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:51.776583   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:51.809113   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:51.809129   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:51.868930   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:51.868948   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:51.879570   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:51.879594   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:51.948757   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:51.940427   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.940848   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.942781   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.943205   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:51.944832   15979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:51.948767   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:51.948777   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
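Each cri.go/ssh_runner.go pair in the cycle above is one container probe: list every container in any state whose name matches a single control-plane component, with crictl's --quiet flag so only container IDs are printed. An empty result is what appears in the log as found id: "" followed by 0 containers: []. A minimal Go sketch of that probe, assuming crictl is on PATH and sudo is available; the function name is illustrative, not minikube's own code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the probe in the log:
    //   sudo crictl ps -a --quiet --name=<component>
    // -a includes stopped containers; --quiet prints one container ID per
    // line, so an empty output means no matching container exists at all.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Printf("probe %s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Against the node in this run, every one of these probes comes back empty, which is why each cycle falls through to gathering kubelet, containerd, dmesg, and describe-nodes output instead.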
	I1213 08:57:54.516634   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:54.526661   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:54.526738   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:54.553106   53550 cri.go:89] found id: ""
	I1213 08:57:54.553120   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.553126   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:54.553132   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:54.553190   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:54.581404   53550 cri.go:89] found id: ""
	I1213 08:57:54.581417   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.581426   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:54.581430   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:54.581484   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:54.605783   53550 cri.go:89] found id: ""
	I1213 08:57:54.605796   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.605803   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:54.605807   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:54.605862   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:54.634146   53550 cri.go:89] found id: ""
	I1213 08:57:54.634160   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.634167   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:54.634171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:54.634227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:54.658720   53550 cri.go:89] found id: ""
	I1213 08:57:54.658734   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.658741   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:54.658746   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:54.658803   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:54.683926   53550 cri.go:89] found id: ""
	I1213 08:57:54.683940   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.683947   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:54.683952   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:54.684011   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:54.712272   53550 cri.go:89] found id: ""
	I1213 08:57:54.712286   53550 logs.go:282] 0 containers: []
	W1213 08:57:54.712293   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:54.712300   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:54.712312   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:57:54.769590   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:54.769607   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:54.781369   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:54.781386   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:54.846793   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:54.837938   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.838582   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840117   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.840708   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:54.842407   16075 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:54.846803   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:54.846813   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:54.913758   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:54.913778   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
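The container-status gather relies on a small shell fallback: the backticks substitute the output of which crictl (the absolute path when crictl is installed, otherwise the literal word crictl, which then fails fast), and the outer || retries with docker ps -a on Docker-runtime nodes. It is run through /bin/bash -c, as the ssh_runner lines above show; a short illustrative Go sketch of the same invocation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when installed (backticks splice in its full path);
        // if the whole crictl invocation fails, fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("neither crictl nor docker worked:", err)
        }
        fmt.Print(string(out))
    }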
	I1213 08:57:57.444332   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:57:57.453993   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:57:57.454058   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:57:57.478195   53550 cri.go:89] found id: ""
	I1213 08:57:57.478209   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.478225   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:57:57.478231   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:57:57.478301   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:57:57.502242   53550 cri.go:89] found id: ""
	I1213 08:57:57.502269   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.502277   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:57:57.502282   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:57:57.502346   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:57:57.525845   53550 cri.go:89] found id: ""
	I1213 08:57:57.525859   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.525867   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:57:57.525872   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:57:57.525931   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:57:57.549123   53550 cri.go:89] found id: ""
	I1213 08:57:57.549137   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.549143   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:57:57.549148   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:57:57.549203   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:57:57.576988   53550 cri.go:89] found id: ""
	I1213 08:57:57.577002   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.577009   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:57:57.577019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:57:57.577076   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:57:57.599837   53550 cri.go:89] found id: ""
	I1213 08:57:57.599851   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.599858   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:57:57.599864   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:57:57.599932   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:57:57.623671   53550 cri.go:89] found id: ""
	I1213 08:57:57.623685   53550 logs.go:282] 0 containers: []
	W1213 08:57:57.623693   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:57:57.623700   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:57:57.623711   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:57:57.634031   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:57:57.634046   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:57:57.695658   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:57:57.687110   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.688029   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.689755   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.690097   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:57:57.691681   16175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:57:57.695668   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:57:57.695678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:57:57.762393   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:57:57.762412   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:57:57.790711   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:57:57.790726   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
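Every describe-nodes failure in these cycles reduces to the same transport-level symptom: nothing is listening on localhost:8441, the endpoint the node-local kubeconfig points at, so the TCP dial is refused before TLS or authentication ever happens. That is consistent with the crictl probes finding no kube-apiserver container, and the repeated client-go "couldn't get current server API group list" errors are just this dial failing on each discovery attempt. A sketch that reproduces only the dial check:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The kubectl errors above are this dial failing, wrapped in
        // client-go's discovery retries.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // connection refused
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }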
	I1213 08:58:00.355817   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:00.372076   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:00.372142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:00.409377   53550 cri.go:89] found id: ""
	I1213 08:58:00.409392   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.409398   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:00.409404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:00.409467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:00.436239   53550 cri.go:89] found id: ""
	I1213 08:58:00.436254   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.436261   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:00.436266   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:00.436326   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:00.461909   53550 cri.go:89] found id: ""
	I1213 08:58:00.461922   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.461929   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:00.461934   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:00.461991   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:00.491257   53550 cri.go:89] found id: ""
	I1213 08:58:00.491270   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.491276   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:00.491281   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:00.491339   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:00.517632   53550 cri.go:89] found id: ""
	I1213 08:58:00.517646   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.517658   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:00.517664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:00.517726   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:00.543370   53550 cri.go:89] found id: ""
	I1213 08:58:00.543384   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.543391   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:00.543396   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:00.543460   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:00.568967   53550 cri.go:89] found id: ""
	I1213 08:58:00.568980   53550 logs.go:282] 0 containers: []
	W1213 08:58:00.568987   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:00.568995   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:00.569005   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:00.636984   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:00.628336   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629304   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.629910   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.631578   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:00.632039   16279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:00.636994   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:00.637006   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:00.699893   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:00.699911   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:00.730182   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:00.730198   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:00.787828   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:00.787847   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:03.298762   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:03.310337   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:03.310399   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:03.345482   53550 cri.go:89] found id: ""
	I1213 08:58:03.345496   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.345503   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:03.345508   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:03.345568   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:03.370651   53550 cri.go:89] found id: ""
	I1213 08:58:03.370664   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.370671   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:03.370676   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:03.370730   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:03.393554   53550 cri.go:89] found id: ""
	I1213 08:58:03.393568   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.393574   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:03.393580   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:03.393638   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:03.418084   53550 cri.go:89] found id: ""
	I1213 08:58:03.418098   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.418105   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:03.418110   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:03.418180   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:03.442426   53550 cri.go:89] found id: ""
	I1213 08:58:03.442440   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.442447   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:03.442451   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:03.442510   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:03.467378   53550 cri.go:89] found id: ""
	I1213 08:58:03.467391   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.467398   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:03.467404   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:03.467539   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:03.493640   53550 cri.go:89] found id: ""
	I1213 08:58:03.493653   53550 logs.go:282] 0 containers: []
	W1213 08:58:03.493660   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:03.493668   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:03.493678   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:03.559295   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:03.551013   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.551897   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553496   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.553816   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:03.555334   16382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:03.559305   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:03.559315   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:03.622616   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:03.622633   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:03.656517   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:03.656534   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:03.715111   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:03.715131   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
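The pgrep probes above recur on a steady cadence of roughly three seconds, which is the shape of a poll loop: check for a kube-apiserver process, and while none exists, re-gather diagnostics and wait before the next attempt, up to an overall deadline. A minimal sketch of such a loop; the three-second interval matches the spacing in this log, but the timeout and structure are illustrative, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's probe:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
        defer cancel()
        tick := time.NewTicker(3 * time.Second) // cadence seen in this log
        defer tick.Stop()
        for {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            // ... gather kubelet/containerd/dmesg logs here, as above ...
            select {
            case <-ctx.Done():
                fmt.Println("gave up waiting for kube-apiserver")
                return
            case <-tick.C:
            }
        }
    }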
	I1213 08:58:06.226614   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:06.237139   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:06.237200   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:06.261636   53550 cri.go:89] found id: ""
	I1213 08:58:06.261652   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.261659   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:06.261664   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:06.261727   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:06.293692   53550 cri.go:89] found id: ""
	I1213 08:58:06.293707   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.293714   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:06.293719   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:06.293778   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:06.321565   53550 cri.go:89] found id: ""
	I1213 08:58:06.321578   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.321584   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:06.321589   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:06.321643   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:06.348809   53550 cri.go:89] found id: ""
	I1213 08:58:06.348856   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.348862   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:06.348869   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:06.348925   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:06.378146   53550 cri.go:89] found id: ""
	I1213 08:58:06.378159   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.378166   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:06.378171   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:06.378227   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:06.402993   53550 cri.go:89] found id: ""
	I1213 08:58:06.403006   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.403013   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:06.403019   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:06.403074   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:06.429062   53550 cri.go:89] found id: ""
	I1213 08:58:06.429076   53550 logs.go:282] 0 containers: []
	W1213 08:58:06.429084   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:06.429092   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:06.429102   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:06.485200   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:06.485218   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:06.496017   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:06.496033   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:06.561266   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:06.552621   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.553412   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.554963   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.555537   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:06.557278   16493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:06.561275   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:06.561299   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:06.624429   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:06.624451   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.152326   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:09.162496   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:09.162552   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:09.187570   53550 cri.go:89] found id: ""
	I1213 08:58:09.187583   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.187590   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:09.187595   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:09.187653   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:09.211361   53550 cri.go:89] found id: ""
	I1213 08:58:09.211375   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.211382   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:09.211387   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:09.211441   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:09.240289   53550 cri.go:89] found id: ""
	I1213 08:58:09.240302   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.240310   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:09.240316   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:09.240381   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:09.263680   53550 cri.go:89] found id: ""
	I1213 08:58:09.263694   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.263701   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:09.263706   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:09.263767   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:09.289437   53550 cri.go:89] found id: ""
	I1213 08:58:09.289451   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.289458   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:09.289463   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:09.289524   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:09.323385   53550 cri.go:89] found id: ""
	I1213 08:58:09.323398   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.323405   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:09.323410   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:09.323467   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:09.353577   53550 cri.go:89] found id: ""
	I1213 08:58:09.353590   53550 logs.go:282] 0 containers: []
	W1213 08:58:09.353597   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:09.353605   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:09.353616   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:09.382787   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:09.382803   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:09.449042   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:09.449060   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:09.460226   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:09.460242   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:09.528091   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:09.519747   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.520489   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522369   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.522869   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:09.524292   16609 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:09.528102   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:09.528112   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
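Each gathering pass pulls the same three host-side sources: the last 400 journal lines for the kubelet and containerd units, and kernel messages at warning severity and above (dmesg -P disables the pager, -H keeps human-readable timestamps, -L=never strips color codes). A sketch of that collection step, with an illustrative helper name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the log-collection commands seen above through
    // bash, as ssh_runner does; the helper name is illustrative.
    func gather(name, command string) {
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("gathering %s failed: %v\n", name, err)
        }
        fmt.Printf("== %s ==\n%s", name, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }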
	I1213 08:58:12.097937   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:12.108009   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:12.108068   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:12.131531   53550 cri.go:89] found id: ""
	I1213 08:58:12.131546   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.131553   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:12.131558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:12.131621   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:12.161149   53550 cri.go:89] found id: ""
	I1213 08:58:12.161163   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.161170   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:12.161175   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:12.161237   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:12.187318   53550 cri.go:89] found id: ""
	I1213 08:58:12.187332   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.187339   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:12.187344   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:12.187400   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:12.212736   53550 cri.go:89] found id: ""
	I1213 08:58:12.212749   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.212756   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:12.212761   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:12.212818   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:12.236946   53550 cri.go:89] found id: ""
	I1213 08:58:12.236959   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.236967   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:12.236973   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:12.237036   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:12.260663   53550 cri.go:89] found id: ""
	I1213 08:58:12.260677   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.260683   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:12.260690   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:12.260746   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:12.292004   53550 cri.go:89] found id: ""
	I1213 08:58:12.292022   53550 logs.go:282] 0 containers: []
	W1213 08:58:12.292030   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:12.292038   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:12.292055   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:12.338118   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:12.338134   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:12.397489   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:12.397527   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:12.408810   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:12.408834   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:12.471195   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:12.462890   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.463457   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465096   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.465641   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:12.467207   16714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:12.471207   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:12.471217   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
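Note that describe-nodes is executed with the version-matched kubectl binary staged inside the node (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the node-local kubeconfig, not with any kubectl on the test host, so the probe reflects what the node itself can reach. As an illustrative sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-matched kubectl that minikube stages inside the node.
        kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            // With no apiserver listening, this fails exactly as in the log.
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }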
	I1213 08:58:15.035075   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:15.046491   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:15.046557   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:15.073355   53550 cri.go:89] found id: ""
	I1213 08:58:15.073368   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.073375   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:15.073381   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:15.073444   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:15.098531   53550 cri.go:89] found id: ""
	I1213 08:58:15.098545   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.098553   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:15.098558   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:15.098620   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:15.125009   53550 cri.go:89] found id: ""
	I1213 08:58:15.125024   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.125031   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:15.125036   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:15.125096   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:15.150565   53550 cri.go:89] found id: ""
	I1213 08:58:15.150579   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.150586   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:15.150591   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:15.150650   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:15.176538   53550 cri.go:89] found id: ""
	I1213 08:58:15.176552   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.176559   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:15.176564   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:15.176622   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:15.200435   53550 cri.go:89] found id: ""
	I1213 08:58:15.200449   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.200472   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:15.200477   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:15.200554   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:15.224596   53550 cri.go:89] found id: ""
	I1213 08:58:15.224610   53550 logs.go:282] 0 containers: []
	W1213 08:58:15.224617   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:15.224625   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:15.224636   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:15.299267   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:15.288531   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.289386   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292038   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292350   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.293775   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:15.288531   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.289386   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292038   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.292350   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:15.293775   16795 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:15.299277   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:15.299287   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:15.370114   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:15.370160   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:15.400555   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:15.400569   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:15.458044   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:15.458062   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
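Note: the block above is one iteration of minikube's apiserver health-check loop. Every ~2-3s it runs pgrep for kube-apiserver, asks crictl for each control-plane container by name, and, when every query comes back empty (found id: "" / 0 containers), gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The identical blocks that follow are further iterations of the same loop. A minimal Go sketch of the crictl probe, written as an illustration of what the cri.go lines log rather than minikube's actual source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers mirrors the probe logged above: `crictl ps -a --quiet
    // --name=<name>` prints one container ID per line, so empty output
    // means no matching container exists.
    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, name := range names {
    		ids, err := findContainers(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

The repeated connection-refused errors against localhost:8441 in each describe-nodes attempt are the same symptom from kubectl's side: nothing is listening on the apiserver port.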
	I1213 08:58:17.970291   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:17.980756   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:17.980816   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:18.009453   53550 cri.go:89] found id: ""
	I1213 08:58:18.009470   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.009478   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:18.009483   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:18.009912   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:18.040545   53550 cri.go:89] found id: ""
	I1213 08:58:18.040560   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.040567   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:18.040572   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:18.040634   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:18.065694   53550 cri.go:89] found id: ""
	I1213 08:58:18.065711   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.065721   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:18.065727   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:18.065795   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:18.091133   53550 cri.go:89] found id: ""
	I1213 08:58:18.091147   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.091155   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:18.091169   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:18.091228   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:18.118236   53550 cri.go:89] found id: ""
	I1213 08:58:18.118250   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.118257   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:18.118262   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:18.118321   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:18.141948   53550 cri.go:89] found id: ""
	I1213 08:58:18.141961   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.141968   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:18.141974   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:18.142030   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:18.167116   53550 cri.go:89] found id: ""
	I1213 08:58:18.167130   53550 logs.go:282] 0 containers: []
	W1213 08:58:18.167137   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:18.167145   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:18.167158   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:18.242811   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:18.234010   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.235596   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.236202   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237378   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237833   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:18.234010   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.235596   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.236202   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237378   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:18.237833   16902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:18.242822   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:18.242833   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:18.314955   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:18.314974   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:18.343207   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:18.343222   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:18.398868   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:18.398887   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:20.911155   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:20.921270   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:20.921329   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:20.949336   53550 cri.go:89] found id: ""
	I1213 08:58:20.949350   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.949356   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:20.949361   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:20.949418   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:20.973382   53550 cri.go:89] found id: ""
	I1213 08:58:20.973395   53550 logs.go:282] 0 containers: []
	W1213 08:58:20.973402   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:20.973408   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:20.973470   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:21.009413   53550 cri.go:89] found id: ""
	I1213 08:58:21.009431   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.009439   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:21.009444   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:21.009508   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:21.038840   53550 cri.go:89] found id: ""
	I1213 08:58:21.038898   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.038906   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:21.038913   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:21.038981   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:21.062283   53550 cri.go:89] found id: ""
	I1213 08:58:21.062296   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.062303   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:21.062308   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:21.062430   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:21.086629   53550 cri.go:89] found id: ""
	I1213 08:58:21.086643   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.086650   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:21.086655   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:21.086725   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:21.113708   53550 cri.go:89] found id: ""
	I1213 08:58:21.113722   53550 logs.go:282] 0 containers: []
	W1213 08:58:21.113729   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:21.113737   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:21.113749   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:21.169462   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:21.169481   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:21.180306   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:21.180328   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:21.242376   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:21.233989   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.234781   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236321   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236625   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.238091   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:21.233989   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.234781   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236321   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.236625   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:21.238091   17009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:21.242386   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:21.242400   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:21.306044   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:21.306063   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:23.838510   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:23.848550   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 08:58:23.848611   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 08:58:23.878675   53550 cri.go:89] found id: ""
	I1213 08:58:23.878689   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.878697   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 08:58:23.878702   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 08:58:23.878770   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 08:58:23.904045   53550 cri.go:89] found id: ""
	I1213 08:58:23.904060   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.904067   53550 logs.go:284] No container was found matching "etcd"
	I1213 08:58:23.904072   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 08:58:23.904142   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 08:58:23.929949   53550 cri.go:89] found id: ""
	I1213 08:58:23.929963   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.929970   53550 logs.go:284] No container was found matching "coredns"
	I1213 08:58:23.929975   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 08:58:23.930035   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 08:58:23.955048   53550 cri.go:89] found id: ""
	I1213 08:58:23.955062   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.955069   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 08:58:23.955078   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 08:58:23.955136   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 08:58:23.979633   53550 cri.go:89] found id: ""
	I1213 08:58:23.979647   53550 logs.go:282] 0 containers: []
	W1213 08:58:23.979654   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 08:58:23.979659   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 08:58:23.979716   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 08:58:24.006479   53550 cri.go:89] found id: ""
	I1213 08:58:24.006495   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.006503   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 08:58:24.006520   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 08:58:24.006593   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 08:58:24.033349   53550 cri.go:89] found id: ""
	I1213 08:58:24.033369   53550 logs.go:282] 0 containers: []
	W1213 08:58:24.033376   53550 logs.go:284] No container was found matching "kindnet"
	I1213 08:58:24.033385   53550 logs.go:123] Gathering logs for container status ...
	I1213 08:58:24.033395   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 08:58:24.060616   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 08:58:24.060635   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 08:58:24.119305   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 08:58:24.119324   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 08:58:24.130335   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 08:58:24.130350   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 08:58:24.197036   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 08:58:24.188772   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.189749   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191420   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191829   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.193342   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 08:58:24.188772   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.189749   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191420   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.191829   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 08:58:24.193342   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 08:58:24.197046   53550 logs.go:123] Gathering logs for containerd ...
	I1213 08:58:24.197058   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 08:58:26.764306   53550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:58:26.775859   53550 kubeadm.go:602] duration metric: took 4m4.554296141s to restartPrimaryControlPlane
	W1213 08:58:26.775922   53550 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
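Note: after 4m4.55s of these probes with no apiserver process ever appearing, minikube gives up on restarting the existing control plane and falls back to a clean slate: the `kubeadm reset --force` on the next line, followed by a fresh `kubeadm init`.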
	I1213 08:58:26.776056   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 08:58:27.191363   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:58:27.204546   53550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:58:27.212501   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:58:27.212553   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:58:27.220364   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:58:27.220373   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 08:58:27.220423   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 08:58:27.228123   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:58:27.228179   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:58:27.235737   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 08:58:27.243839   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:58:27.243909   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:58:27.251406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.259128   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:58:27.259197   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:58:27.266406   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 08:58:27.274290   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:58:27.274347   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
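Note: the ls/grep/rm sequence above is minikube's stale-kubeconfig cleanup. Each file under /etc/kubernetes is grepped for the expected server URL https://control-plane.minikube.internal:8441 and removed if it does not match. Here every command exits with status 2 because the preceding `kubeadm reset` already deleted the files, which is why "found existing configuration files:" is followed by an empty list and the cleanup is effectively a no-op.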
	I1213 08:58:27.281913   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:58:27.321302   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 08:58:27.321349   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:58:27.394605   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:58:27.394672   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 08:58:27.394706   53550 kubeadm.go:319] OS: Linux
	I1213 08:58:27.394750   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:58:27.394798   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 08:58:27.394844   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:58:27.394891   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:58:27.394938   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:58:27.394984   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:58:27.395028   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:58:27.395075   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:58:27.395120   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 08:58:27.462440   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:58:27.462546   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:58:27.462635   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:58:27.476078   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:58:27.481288   53550 out.go:252]   - Generating certificates and keys ...
	I1213 08:58:27.481378   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:58:27.481454   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:58:27.481542   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 08:58:27.481611   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 08:58:27.481690   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 08:58:27.481750   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 08:58:27.481822   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 08:58:27.481892   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 08:58:27.481974   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 08:58:27.482055   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 08:58:27.482101   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 08:58:27.482165   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:58:27.905850   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:58:28.178703   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:58:28.541521   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:58:28.686915   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:58:29.281245   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:58:29.281953   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:58:29.285342   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:58:29.288544   53550 out.go:252]   - Booting up control plane ...
	I1213 08:58:29.288640   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:58:29.288718   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:58:29.289378   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:58:29.310312   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:58:29.310629   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:58:29.318324   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:58:29.318581   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:58:29.318622   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:58:29.457400   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:58:29.457506   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:02:29.458561   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001216357s
	I1213 09:02:29.458592   53550 kubeadm.go:319] 
	I1213 09:02:29.458674   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:02:29.458746   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:02:29.458876   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:02:29.458882   53550 kubeadm.go:319] 
	I1213 09:02:29.458995   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:02:29.459029   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:02:29.459061   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:02:29.459065   53550 kubeadm.go:319] 
	I1213 09:02:29.463013   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:02:29.463412   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:02:29.463534   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:02:29.463755   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:02:29.463760   53550 kubeadm.go:319] 
	I1213 09:02:29.463824   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
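Note: the fatal step is the kubelet health wait. kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and aborts when the endpoint never answers. A rough, self-contained approximation of the shape of that wait (an assumption for illustration, not kubeadm's actual code):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // Poll the kubelet healthz endpoint until it returns 200 OK or the
    // 4-minute deadline expires -- the failure mode logged above.
    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("kubelet is healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("[kubelet-check] The kubelet is not healthy after 4m0s")
    }

A kubelet that never binds 10248 also explains every earlier probe failure: no kubelet means no static pods, so no kube-apiserver container and no listener on 8441.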
	W1213 09:02:29.463944   53550 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001216357s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
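	Note: the two SystemVerification warnings above point at the most likely root cause. This arm64 host (kernel 5.15.0-1084-aws) is still on cgroup v1, and per the warning text kubelet v1.35 no longer tolerates cgroup v1 unless the kubelet configuration sets FailCgroupV1 to false (see the linked KEP-5573). A kubelet that exits at startup for that reason would produce exactly the symptoms logged here: no process on 10248, no control-plane containers, and the 4m healthz timeout.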
	
	I1213 09:02:29.464028   53550 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:02:29.874512   53550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:02:29.888184   53550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:02:29.888240   53550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:02:29.896053   53550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:02:29.896063   53550 kubeadm.go:158] found existing configuration files:
	
	I1213 09:02:29.896114   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 09:02:29.904008   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:02:29.904062   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:02:29.911453   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 09:02:29.919369   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:02:29.919421   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:02:29.927024   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.934996   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:02:29.935050   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:02:29.942367   53550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 09:02:29.949946   53550 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:02:29.950000   53550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:02:29.957647   53550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:02:29.995750   53550 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:02:29.995800   53550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:02:30.116553   53550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:02:30.116615   53550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:02:30.116649   53550 kubeadm.go:319] OS: Linux
	I1213 09:02:30.116693   53550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:02:30.116740   53550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:02:30.116785   53550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:02:30.116832   53550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:02:30.116879   53550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:02:30.116934   53550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:02:30.116978   53550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:02:30.117024   53550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:02:30.117071   53550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:02:30.188905   53550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:02:30.189016   53550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:02:30.189118   53550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:02:30.196039   53550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:02:30.201335   53550 out.go:252]   - Generating certificates and keys ...
	I1213 09:02:30.201440   53550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:02:30.201521   53550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:02:30.201609   53550 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:02:30.201670   53550 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:02:30.201747   53550 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:02:30.201835   53550 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:02:30.201908   53550 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:02:30.201970   53550 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:02:30.202045   53550 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:02:30.202116   53550 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:02:30.202153   53550 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:02:30.202209   53550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:02:30.255550   53550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:02:30.417221   53550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:02:30.868435   53550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:02:31.140633   53550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:02:31.298069   53550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:02:31.298995   53550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:02:31.302412   53550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:02:31.305750   53550 out.go:252]   - Booting up control plane ...
	I1213 09:02:31.305854   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:02:31.305930   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:02:31.305995   53550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:02:31.327053   53550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:02:31.327169   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:02:31.334414   53550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:02:31.334677   53550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:02:31.334719   53550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:02:31.474852   53550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:02:31.474965   53550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:06:31.473943   53550 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000237859s
	I1213 09:06:31.473980   53550 kubeadm.go:319] 
	I1213 09:06:31.474081   53550 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:06:31.474292   53550 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:06:31.474479   53550 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:06:31.474488   53550 kubeadm.go:319] 
	I1213 09:06:31.474674   53550 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:06:31.474967   53550 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:06:31.475021   53550 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:06:31.475025   53550 kubeadm.go:319] 
	I1213 09:06:31.479982   53550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:06:31.480734   53550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:06:31.480923   53550 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:06:31.481347   53550 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:06:31.481355   53550 kubeadm.go:319] 
	I1213 09:06:31.481475   53550 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:06:31.481540   53550 kubeadm.go:403] duration metric: took 12m9.29303151s to StartCluster
	I1213 09:06:31.481569   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:06:31.481637   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:06:31.505490   53550 cri.go:89] found id: ""
	I1213 09:06:31.505505   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.505511   53550 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:06:31.505516   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:06:31.505576   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:06:31.533408   53550 cri.go:89] found id: ""
	I1213 09:06:31.533422   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.533429   53550 logs.go:284] No container was found matching "etcd"
	I1213 09:06:31.533433   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:06:31.533495   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:06:31.563195   53550 cri.go:89] found id: ""
	I1213 09:06:31.563218   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.563225   53550 logs.go:284] No container was found matching "coredns"
	I1213 09:06:31.563230   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:06:31.563288   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:06:31.588179   53550 cri.go:89] found id: ""
	I1213 09:06:31.588192   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.588199   53550 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:06:31.588204   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:06:31.588262   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:06:31.613124   53550 cri.go:89] found id: ""
	I1213 09:06:31.613137   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.613144   53550 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:06:31.613149   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:06:31.613204   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:06:31.637268   53550 cri.go:89] found id: ""
	I1213 09:06:31.637282   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.637297   53550 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:06:31.637303   53550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:06:31.637360   53550 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:06:31.661188   53550 cri.go:89] found id: ""
	I1213 09:06:31.661208   53550 logs.go:282] 0 containers: []
	W1213 09:06:31.661214   53550 logs.go:284] No container was found matching "kindnet"
	I1213 09:06:31.661223   53550 logs.go:123] Gathering logs for container status ...
	I1213 09:06:31.661232   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:06:31.690241   53550 logs.go:123] Gathering logs for kubelet ...
	I1213 09:06:31.690257   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:06:31.745899   53550 logs.go:123] Gathering logs for dmesg ...
	I1213 09:06:31.745917   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:06:31.756123   53550 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:06:31.756137   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:06:31.847485   53550 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 09:06:31.839464   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.840155   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.841679   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.842167   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:06:31.843750   20940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:06:31.847496   53550 logs.go:123] Gathering logs for containerd ...
	I1213 09:06:31.847506   53550 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1213 09:06:31.908510   53550 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:06:31.908551   53550 out.go:285] * 
	W1213 09:06:31.908654   53550 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.908704   53550 out.go:285] * 
	W1213 09:06:31.910815   53550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:06:31.916295   53550 out.go:203] 
	W1213 09:06:31.920097   53550 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000237859s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:06:31.920144   53550 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:06:31.920163   53550 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:06:31.923856   53550 out.go:203] 
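	The start log above fails in a tight loop for a single underlying reason: the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host unless its configuration explicitly sets failCgroupV1 to false (see the SystemVerification warning in the kubeadm stderr and the kubelet journal below). A minimal diagnostic sketch, assuming shell access to the node; the file path is illustrative, not something minikube writes itself:

	# Print the filesystem type of the cgroup mount:
	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, the failing case here.
	stat -fc %T /sys/fs/cgroup/

	# Hypothetical KubeletConfiguration fragment opting back into cgroup v1,
	# per the WARNING text above (how it gets merged into the kubelet's
	# effective config is tool-specific):
	cat <<'EOF' > /tmp/kubelet-cgroup-v1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF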
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
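	containerd itself boots cleanly; the only error in its journal is the CNI config one, which is expected before a network plugin has been installed. A quick check, assuming a shell on the node, that distinguishes a missing CNI config from a malformed one:

	# An empty directory matches the "no network config found in
	# /etc/cni/net.d" error above; a present-but-broken config would list here.
	ls -la /etc/cni/net.d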
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:19.509011   22372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:19.509857   22372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:19.511650   22372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:19.512305   22372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:19.513985   22372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
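	Every kubectl probe in this report dies the same way: connection refused on localhost:8441, the non-default apiserver port this profile was started with. A direct probe, assuming a shell on the node, that separates "nothing listening" from "apiserver listening but unhealthy":

	# Refused while no apiserver is bound to the port; a running apiserver
	# would complete the TLS handshake and return an HTTP status.
	curl -k https://localhost:8441/healthz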
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:08:19 up 50 min,  0 user,  load average: 0.21, 0.19, 0.32
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:08:16 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 13 09:08:17 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:17 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:17 functional-074420 kubelet[22260]: E1213 09:08:17.070474   22260 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 13 09:08:17 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:17 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:17 functional-074420 kubelet[22265]: E1213 09:08:17.821431   22265 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:17 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:18 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 13 09:08:18 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:18 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:18 functional-074420 kubelet[22272]: E1213 09:08:18.571591   22272 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:18 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:18 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:19 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 463.
	Dec 13 09:08:19 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:19 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:19 functional-074420 kubelet[22320]: E1213 09:08:19.336873   22320 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:19 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:19 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
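	The journal confirms a restart loop rather than a one-off crash: the counter climbs from 460 to 463 in roughly three seconds, and every attempt exits on the same cgroup v1 validation error. The troubleshooting commands kubeadm suggested earlier apply directly; a sketch, assuming systemd on the node:

	# Unit state plus the most recent kubelet log lines:
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50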
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (351.190109ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
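The helper warnings that follow come from polling the apiserver for pods matching a label selector, one warning per failed attempt; the equivalent manual query, which fails identically while the apiserver on 8441 is down, would be:

	# Hand-run version of the helper's pod list, assuming a kubeconfig
	# pointed at this profile:
	kubectl get pods -n kube-system -l integration-test=storage-provisioner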
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 09:06:50.169924    4120 retry.go:31] will retry after 2.904682402s: Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 09:07:03.075758    4120 retry.go:31] will retry after 2.828012839s: Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 09:07:14.443741    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 09:07:15.905631    4120 retry.go:31] will retry after 6.283098361s: Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 09:07:32.189804    4120 retry.go:31] will retry after 9.287705321s: Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 11 more times]
I1213 09:07:51.477825    4120 retry.go:31] will retry after 16.33550767s: Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 145 more times]
E1213 09:10:17.518318    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 15 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (309.974122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
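The timed-out poll above is just a label-filtered pod list against the apiserver. A minimal way to reproduce it by hand, using the kubeconfig context and the label selector copied from the request URL in the warnings:

    # List the storage-provisioner test pods the poll was waiting for.
    kubectl --context functional-074420 get pods -n kube-system \
      -l integration-test=storage-provisioner

With the apiserver refusing connections on 192.168.49.2:8441, this command fails the same way the test's poll did.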
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
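The inspect output above shows the apiserver port 8441/tcp published on 127.0.0.1:32791. A sketch of extracting just that binding with docker's own template syntax (container name as in this report):

    # Print the host port that 8441/tcp (the apiserver port) is mapped to.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-074420

Since the container is Running while the apiserver reports Stopped, the mapping exists but nothing answers behind it, which matches the connection-refused errors earlier.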
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (330.038696ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-074420 image load --daemon kicbase/echo-server:functional-074420 --alsologtostderr                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls                                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image save kicbase/echo-server:functional-074420 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image rm kicbase/echo-server:functional-074420 --alsologtostderr                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls                                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls                                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image save --daemon kicbase/echo-server:functional-074420 --alsologtostderr                                                                   │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /etc/test/nested/copy/4120/hosts                                                                                                 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /etc/ssl/certs/4120.pem                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /usr/share/ca-certificates/4120.pem                                                                                              │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /etc/ssl/certs/41202.pem                                                                                                         │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /usr/share/ca-certificates/41202.pem                                                                                             │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls --format short --alsologtostderr                                                                                                     │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls --format yaml --alsologtostderr                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh            │ functional-074420 ssh pgrep buildkitd                                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ image          │ functional-074420 image build -t localhost/my-image:functional-074420 testdata/build --alsologtostderr                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls                                                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls --format json --alsologtostderr                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ image          │ functional-074420 image ls --format table --alsologtostderr                                                                                                     │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ update-context │ functional-074420 update-context --alsologtostderr -v=2                                                                                                         │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ update-context │ functional-074420 update-context --alsologtostderr -v=2                                                                                                         │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ update-context │ functional-074420 update-context --alsologtostderr -v=2                                                                                                         │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:08:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:08:35.252578   70821 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:08:35.252689   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252702   70821 out.go:374] Setting ErrFile to fd 2...
	I1213 09:08:35.252708   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252952   70821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:08:35.253296   70821 out.go:368] Setting JSON to false
	I1213 09:08:35.254045   70821 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3068,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:08:35.254113   70821 start.go:143] virtualization:  
	I1213 09:08:35.257261   70821 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:08:35.260972   70821 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:08:35.261121   70821 notify.go:221] Checking for updates...
	I1213 09:08:35.266518   70821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:08:35.269328   70821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:08:35.272289   70821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:08:35.275299   70821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:08:35.278118   70821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:08:35.281471   70821 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:08:35.282073   70821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:08:35.321729   70821 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:08:35.321869   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.382331   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.373170311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.382437   70821 docker.go:319] overlay module found
	I1213 09:08:35.387422   70821 out.go:179] * Using the docker driver based on existing profile
	I1213 09:08:35.390338   70821 start.go:309] selected driver: docker
	I1213 09:08:35.390361   70821 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.390461   70821 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:08:35.390572   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.442520   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.433498577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.442956   70821 cni.go:84] Creating CNI manager for ""
	I1213 09:08:35.443020   70821 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:08:35.443065   70821 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.446147   70821 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:08:41 functional-074420 containerd[9680]: time="2025-12-13T09:08:41.596320207Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:41 functional-074420 containerd[9680]: time="2025-12-13T09:08:41.596995200Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.696770881Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-074420\""
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.699796160Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-074420\""
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.702308153Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.713065360Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-074420\" returns successfully"
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.945912554Z" level=info msg="No images store for sha256:30d2fe65ae0aee052adabc2c19262319c4583e2bc0c3dc423fc815e82ed1a86d"
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.947940046Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-074420\""
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.955990591Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:42 functional-074420 containerd[9680]: time="2025-12-13T09:08:42.956343276Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:43 functional-074420 containerd[9680]: time="2025-12-13T09:08:43.758714857Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-074420\""
	Dec 13 09:08:43 functional-074420 containerd[9680]: time="2025-12-13T09:08:43.761137668Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-074420\""
	Dec 13 09:08:43 functional-074420 containerd[9680]: time="2025-12-13T09:08:43.763051846Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 13 09:08:43 functional-074420 containerd[9680]: time="2025-12-13T09:08:43.771701462Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-074420\" returns successfully"
	Dec 13 09:08:44 functional-074420 containerd[9680]: time="2025-12-13T09:08:44.458431580Z" level=info msg="No images store for sha256:0b2ad14710c96ff1497ad7cc9faf5563d84fea51d3cbaaf420f94ee277de5723"
	Dec 13 09:08:44 functional-074420 containerd[9680]: time="2025-12-13T09:08:44.460744949Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-074420\""
	Dec 13 09:08:44 functional-074420 containerd[9680]: time="2025-12-13T09:08:44.470232818Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:44 functional-074420 containerd[9680]: time="2025-12-13T09:08:44.470891926Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.082572988Z" level=info msg="connecting to shim is125dtbdtu3ouurxqd2pjplc" address="unix:///run/containerd/s/d95f0fc4c2f88c872a3b7f37bcdea367f5a5f89f3a38b8209905d35f540342a6" namespace=k8s.io protocol=ttrpc version=3
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.152623265Z" level=info msg="shim disconnected" id=is125dtbdtu3ouurxqd2pjplc namespace=k8s.io
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.153464407Z" level=info msg="cleaning up after shim disconnected" id=is125dtbdtu3ouurxqd2pjplc namespace=k8s.io
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.153499525Z" level=info msg="cleaning up dead shim" id=is125dtbdtu3ouurxqd2pjplc namespace=k8s.io
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.443242937Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-074420\""
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.451415149Z" level=info msg="ImageCreate event name:\"sha256:0edb90c0573c14c80840840e3427f11b2e2753cf04732bf7213aace1811bc560\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:08:51 functional-074420 containerd[9680]: time="2025-12-13T09:08:51.451811584Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-074420\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:10:41.838629   25128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.839046   25128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.840755   25128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.841203   25128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:10:41.842950   25128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:10:41 up 53 min,  0 user,  load average: 0.29, 0.30, 0.35
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:10:38 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:10:39 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 13 09:10:39 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:39 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:39 functional-074420 kubelet[25004]: E1213 09:10:39.567304   25004 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:10:39 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:10:39 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:10:40 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 13 09:10:40 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:40 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:40 functional-074420 kubelet[25010]: E1213 09:10:40.318384   25010 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:10:40 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:10:40 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 13 09:10:41 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:41 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:41 functional-074420 kubelet[25031]: E1213 09:10:41.091335   25031 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 13 09:10:41 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:41 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:10:41 functional-074420 kubelet[25133]: E1213 09:10:41.829350   25133 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:10:41 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
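The kubelet section above contains the root cause: kubelet v1.35.0-beta.0 refuses to start because the host is still running the cgroup v1 hierarchy, so the apiserver pod can never come up and every client sees connection refused. A minimal way to confirm which cgroup version a machine is running, on the CI host or inside the kic container (container name taken from this report):

    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy.
    stat -fc %T /sys/fs/cgroup/
    # Same check inside the minikube node container:
    docker exec functional-074420 stat -fc %T /sys/fs/cgroup/

On this Ubuntu 20.04 / 5.15 AWS worker the first command would report tmpfs, consistent with the kubelet validation error.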
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (355.651668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.70s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-074420 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-074420 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (60.787282ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-074420 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
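The template failure itself is secondary: (index .items 0) panics because kubectl falls back to an empty List when the apiserver is unreachable, hence the "slice index out of range" above. A sketch of a guarded variant (same context name; this does not fix the refused connection, it only renders nothing instead of panicking on an empty list):

    # Guard the index with an emptiness check so an empty .items renders nothing.
    kubectl --context functional-074420 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'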
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

** /stderr **
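The template error above is a downstream symptom rather than the real failure: with the apiserver at 192.168.49.2:8441 refusing connections, kubectl returns an empty List, and "(index .items 0)" then fails at template-execution time. A minimal Go sketch, not part of the test suite, that reproduces the error against the same data shape as the raw output above, plus a range-based variant that tolerates an empty item list:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    func main() {
        // Same shape as the "raw data" in the failure: an empty List.
        data := map[string]any{"items": []any{}}

        // The template the test uses: indexing element 0 of an empty
        // slice fails at execution time (slice index out of range).
        bad := template.Must(template.New("output").Parse(
            `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
        if err := bad.Execute(os.Stdout, data); err != nil {
            fmt.Fprintln(os.Stderr, "template error:", err)
        }

        // Defensive variant: range over .items instead of indexing,
        // so an empty list prints nothing instead of erroring.
        ok := template.Must(template.New("output").Parse(
            `{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}`))
        _ = ok.Execute(os.Stdout, data)
    }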
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:

-- stdout --
	[
	    {
	        "Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	        "Created": "2025-12-13T08:39:40.050933605Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:39:40.114566965Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
	        "Name": "/functional-074420",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-074420:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-074420",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
	                "LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-074420",
	                "Source": "/var/lib/docker/volumes/functional-074420/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-074420",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-074420",
	                "name.minikube.sigs.k8s.io": "functional-074420",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
	            "SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-074420": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:e0:c5:f8:aa:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
	                    "EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-074420",
	                        "662fb3d52b3e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 2 (328.144809ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-074420 service hello-node --url                                                                                                          │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1               │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh cat /mount-9p/test-1765616905262471887                                                                                        │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh -- ls -la /mount-9p                                                                                                           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh sudo umount -f /mount-9p                                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount1 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount3 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ mount     │ -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount2 --alsologtostderr -v=1                │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ ssh       │ functional-074420 ssh findmnt -T /mount1                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh findmnt -T /mount2                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ ssh       │ functional-074420 ssh findmnt -T /mount3                                                                                                            │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │ 13 Dec 25 09:08 UTC │
	│ mount     │ -p functional-074420 --kill=true                                                                                                                    │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ start     │ -p functional-074420 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-074420 --alsologtostderr -v=1                                                                                      │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 09:08 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:08:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:08:35.252578   70821 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:08:35.252689   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252702   70821 out.go:374] Setting ErrFile to fd 2...
	I1213 09:08:35.252708   70821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.252952   70821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:08:35.253296   70821 out.go:368] Setting JSON to false
	I1213 09:08:35.254045   70821 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3068,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:08:35.254113   70821 start.go:143] virtualization:  
	I1213 09:08:35.257261   70821 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:08:35.260972   70821 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:08:35.261121   70821 notify.go:221] Checking for updates...
	I1213 09:08:35.266518   70821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:08:35.269328   70821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:08:35.272289   70821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:08:35.275299   70821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:08:35.278118   70821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:08:35.281471   70821 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:08:35.282073   70821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:08:35.321729   70821 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:08:35.321869   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.382331   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.373170311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.382437   70821 docker.go:319] overlay module found
	I1213 09:08:35.387422   70821 out.go:179] * Using the docker driver based on existing profile
	I1213 09:08:35.390338   70821 start.go:309] selected driver: docker
	I1213 09:08:35.390361   70821 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.390461   70821 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:08:35.390572   70821 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.442520   70821 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.433498577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.442956   70821 cni.go:84] Creating CNI manager for ""
	I1213 09:08:35.443020   70821 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:08:35.443065   70821 start.go:353] cluster config:
	{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.446147   70821 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604408682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604420440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604455862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604474439Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604487756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604500244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604511765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604526116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604543576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604572007Z" level=info msg="Connect containerd service"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.604858731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.605398020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621543154Z" level=info msg="Start subscribing containerd event"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621557350Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621826530Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.621766673Z" level=info msg="Start recovering state"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663604117Z" level=info msg="Start event monitor"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663796472Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663882619Z" level=info msg="Start streaming server"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.663973525Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664032545Z" level=info msg="runtime interface starting up..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664089374Z" level=info msg="starting plugins..."
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.664157059Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 08:54:20 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 08:54:20 functional-074420 containerd[9680]: time="2025-12-13T08:54:20.666240376Z" level=info msg="containerd successfully booted in 0.083219s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 09:08:38.131348   23376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:38.132042   23376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:38.133892   23376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:38.134422   23376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 09:08:38.136211   23376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:08:38 up 51 min,  0 user,  load average: 0.72, 0.31, 0.36
	Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 09:08:35 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:35 functional-074420 kubelet[23140]: E1213 09:08:35.828496   23140 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:35 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 09:08:36 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:36 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:36 functional-074420 kubelet[23200]: E1213 09:08:36.584914   23200 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:36 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:37 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 13 09:08:37 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:37 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:37 functional-074420 kubelet[23271]: E1213 09:08:37.334155   23271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:37 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:37 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:08:38 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 13 09:08:38 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:38 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:08:38 functional-074420 kubelet[23365]: E1213 09:08:38.087622   23365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:08:38 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:08:38 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 2 (316.964655ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.40s)
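The kubelet log above shows the likely root cause for this whole group of failures: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), restarts in a loop (counter at 488), and the apiserver therefore never comes up, so every kubectl call above gets connection refused. A quick way to check which cgroup version a host runs, as a standard-library-only Go sketch (cgroup2Magic is CGROUP2_SUPER_MAGIC from linux/magic.h):

    //go:build linux

    package main

    import (
        "fmt"
        "syscall"
    )

    // CGROUP2_SUPER_MAGIC: the filesystem type reported for a
    // unified-hierarchy /sys/fs/cgroup mount.
    const cgroup2Magic = 0x63677270

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
            fmt.Println("statfs failed:", err)
            return
        }
        if st.Type == cgroup2Magic {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 host: kubelet v1.35.0-beta.0 will refuse to start")
        }
    }

The shell equivalent is stat -fc %T /sys/fs/cgroup, which prints cgroup2fs on a v2 host.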

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 09:06:39.586691   66570 out.go:360] Setting OutFile to fd 1 ...
I1213 09:06:39.586787   66570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:06:39.586792   66570 out.go:374] Setting ErrFile to fd 2...
I1213 09:06:39.586798   66570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:06:39.587151   66570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:06:39.587428   66570 mustload.go:66] Loading cluster: functional-074420
I1213 09:06:39.588160   66570 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:06:39.588895   66570 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:06:39.617244   66570 host.go:66] Checking if "functional-074420" exists ...
I1213 09:06:39.617563   66570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 09:06:39.736288   66570 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:06:39.721529425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 09:06:39.736415   66570 api_server.go:166] Checking apiserver status ...
I1213 09:06:39.736471   66570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 09:06:39.736518   66570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:06:39.766646   66570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
W1213 09:06:39.877210   66570 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 09:06:39.880439   66570 out.go:179] * The control-plane node functional-074420 apiserver is not running: (state=Stopped)
I1213 09:06:39.883563   66570 out.go:179]   To start a cluster, run: "minikube start -p functional-074420"

stdout: * The control-plane node functional-074420 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-074420"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 66571: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)
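The exit code 103 here is the tunnel's pre-flight check failing, not the tunnel itself: the trace above shows minikube probing for a kube-apiserver process over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*) and bailing out once the control plane is found stopped. A hedged sketch of an equivalent external probe (not minikube's implementation; the host, port, and self-signed-certificate situation are taken from this run):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // /livez is the apiserver liveness endpoint; certificate checks
        // are skipped because minikube's apiserver cert is self-signed.
        client := &http.Client{
            Timeout: 3 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8441/livez")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // "connection refused" in this run
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver:", resp.Status)
    }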

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-074420 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-074420 apply -f testdata/testsvc.yaml: exit status 1 (164.39301ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-074420 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.17s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.99.49.188": Temporary Error: Get "http://10.99.49.188": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-074420 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-074420 get svc nginx-svc: exit status 1 (69.225123ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-074420 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (97.72s)
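Note: 10.99.49.188 is the ClusterIP of nginx-svc, which is reachable from the host only while a tunnel process is alive. A minimal by-hand version of what the assertion does (illustrative; the address comes from the failed run above):

  kubectl --context functional-074420 get svc nginx-svc -o jsonpath='{.spec.clusterIP}'
  curl --max-time 10 http://10.99.49.188 | grep 'Welcome to nginx!'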
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-074420 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-074420 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.550024ms)
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-074420 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)
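Note: every later ServiceCmd subtest depends on this hello-node deployment, so the connection refused here cascades through the group. When triaging, the endpoint kubectl is dialing can be probed directly (illustrative; -k skips verification of the self-signed apiserver certificate):

  curl -k --max-time 5 https://192.168.49.2:8441/healthz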
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 service list: exit status 103 (260.796783ms)
-- stdout --
	* The control-plane node functional-074420 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-074420"
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-074420 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-074420 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-074420\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)
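Note: the command exits 103 and prints guidance text instead of a service table, so the substring check for hello-node fails on the advisory message. On a healthy cluster the same assertion can be approximated with (illustrative):

  out/minikube-linux-arm64 -p functional-074420 service list | grep hello-node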
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 service list -o json: exit status 103 (269.388429ms)
-- stdout --
	* The control-plane node functional-074420 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-074420"
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-074420 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)
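Note: this subtest only asserts on the exit code, but the stdout above is also not JSON. A cheap well-formedness check when reproducing by hand (illustrative):

  out/minikube-linux-arm64 -p functional-074420 service list -o json | python3 -m json.tool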
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 service --namespace=default --https --url hello-node: exit status 103 (268.158525ms)
-- stdout --
	* The control-plane node functional-074420 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-074420"
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-074420 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 service hello-node --url --format={{.IP}}: exit status 103 (263.40815ms)
-- stdout --
	* The control-plane node functional-074420 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-074420"
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-074420 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-074420 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-074420\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 service hello-node --url: exit status 103 (268.724167ms)
-- stdout --
	* The control-plane node functional-074420 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-074420"
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-074420 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-074420 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-074420"
functional_test.go:1579: failed to parse "* The control-plane node functional-074420 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-074420\"": parse "* The control-plane node functional-074420 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-074420\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.27s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765616905262471887" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765616905262471887" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765616905262471887" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001/test-1765616905262471887
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.30301ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 09:08:25.625032    4120 retry.go:31] will retry after 616.966827ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 09:08 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 09:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 09:08 test-1765616905262471887
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh cat /mount-9p/test-1765616905262471887
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-074420 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-074420 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (55.251914ms)
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-074420 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (280.212688ms)
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=36909)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 09:08 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 09:08 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 09:08 test-1765616905262471887
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-074420 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:36909
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001 to /mount-9p
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 09:08:25.380925   68870 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:25.381133   68870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:25.381147   68870 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:25.381153   68870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:25.381409   68870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:25.381666   68870 mustload.go:66] Loading cluster: functional-074420
I1213 09:08:25.382023   68870 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:25.382530   68870 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:25.406190   68870 host.go:66] Checking if "functional-074420" exists ...
I1213 09:08:25.406492   68870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 09:08:25.489362   68870 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:25.478705652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 09:08:25.489504   68870 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 09:08:25.509294   68870 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001 into VM as /mount-9p ...
I1213 09:08:25.516811   68870 out.go:179]   - Mount type:   9p
I1213 09:08:25.519667   68870 out.go:179]   - User ID:      docker
I1213 09:08:25.522504   68870 out.go:179]   - Group ID:     docker
I1213 09:08:25.527718   68870 out.go:179]   - Version:      9p2000.L
I1213 09:08:25.530636   68870 out.go:179]   - Message Size: 262144
I1213 09:08:25.533553   68870 out.go:179]   - Options:      map[]
I1213 09:08:25.536359   68870 out.go:179]   - Bind Address: 192.168.49.1:36909
I1213 09:08:25.539183   68870 out.go:179] * Userspace file server: 
I1213 09:08:25.539477   68870 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 09:08:25.539577   68870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:25.557393   68870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:25.666079   68870 mount.go:180] unmount for /mount-9p ran successfully
I1213 09:08:25.666107   68870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 09:08:25.674157   68870 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36909,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 09:08:25.684501   68870 main.go:127] stdlog: ufs.go:141 connected
I1213 09:08:25.684667   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 09:08:25.684704   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rversion tag 65535 msize 262144 version '9P2000'
I1213 09:08:25.684924   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 09:08:25.684989   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rattach tag 0 aqid (ed6ccb 16f7ec29 'd')
I1213 09:08:25.685665   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 0
I1213 09:08:25.685727   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb 16f7ec29 'd') m d775 at 0 mt 1765616905 l 4096 t 0 d 0 ext )
I1213 09:08:25.689435   68870 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/.mount-process: {Name:mkc53420e6c7644bca9e6b6136ed7181cd9e81ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:08:25.689626   68870 mount.go:105] mount successful: ""
I1213 09:08:25.693033   68870 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo811051608/001 to /mount-9p
I1213 09:08:25.695819   68870 out.go:203] 
I1213 09:08:25.698609   68870 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 09:08:26.763808   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 0
I1213 09:08:26.763879   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb 16f7ec29 'd') m d775 at 0 mt 1765616905 l 4096 t 0 d 0 ext )
I1213 09:08:26.764246   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 1 
I1213 09:08:26.764276   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 
I1213 09:08:26.764425   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Topen tag 0 fid 1 mode 0
I1213 09:08:26.764473   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Ropen tag 0 qid (ed6ccb 16f7ec29 'd') iounit 0
I1213 09:08:26.764661   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 0
I1213 09:08:26.764692   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb 16f7ec29 'd') m d775 at 0 mt 1765616905 l 4096 t 0 d 0 ext )
I1213 09:08:26.764859   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 0 count 262120
I1213 09:08:26.764970   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 258
I1213 09:08:26.765121   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 261862
I1213 09:08:26.765146   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:26.765293   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:08:26.765321   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:26.765453   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:08:26.765492   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6ccc 16f7ec29 '') 
I1213 09:08:26.765632   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.765663   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.765788   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.765819   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.765957   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:26.765992   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:26.766128   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'test-1765616905262471887' 
I1213 09:08:26.766156   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6cce 16f7ec29 '') 
I1213 09:08:26.766277   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.766305   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.766420   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.766451   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.766573   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:26.766591   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:26.766759   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:08:26.766789   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6ccd 16f7ec29 '') 
I1213 09:08:26.766914   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.766942   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.767058   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:26.767089   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:26.767198   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:26.767225   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:26.767345   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:08:26.767370   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:26.767492   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 1
I1213 09:08:26.767527   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.047686   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 1 0:'test-1765616905262471887' 
I1213 09:08:27.047756   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6cce 16f7ec29 '') 
I1213 09:08:27.047914   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 1
I1213 09:08:27.047972   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.048142   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 1 newfid 2 
I1213 09:08:27.048173   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 
I1213 09:08:27.048288   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Topen tag 0 fid 2 mode 0
I1213 09:08:27.048345   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Ropen tag 0 qid (ed6cce 16f7ec29 '') iounit 0
I1213 09:08:27.048489   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 1
I1213 09:08:27.048552   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.048681   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 2 offset 0 count 262120
I1213 09:08:27.048742   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 24
I1213 09:08:27.048876   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 2 offset 24 count 262120
I1213 09:08:27.048906   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:27.049043   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 2 offset 24 count 262120
I1213 09:08:27.049075   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:27.049277   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:27.049311   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.049492   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 1
I1213 09:08:27.049521   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.386574   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 0
I1213 09:08:27.386660   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb 16f7ec29 'd') m d775 at 0 mt 1765616905 l 4096 t 0 d 0 ext )
I1213 09:08:27.387026   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 1 
I1213 09:08:27.387068   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 
I1213 09:08:27.387197   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Topen tag 0 fid 1 mode 0
I1213 09:08:27.387248   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Ropen tag 0 qid (ed6ccb 16f7ec29 'd') iounit 0
I1213 09:08:27.387385   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 0
I1213 09:08:27.387418   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6ccb 16f7ec29 'd') m d775 at 0 mt 1765616905 l 4096 t 0 d 0 ext )
I1213 09:08:27.387596   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 0 count 262120
I1213 09:08:27.387709   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 258
I1213 09:08:27.387853   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 261862
I1213 09:08:27.387880   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:27.387995   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:08:27.388023   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:27.388217   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 09:08:27.388250   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6ccc 16f7ec29 '') 
I1213 09:08:27.388365   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.388397   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.388542   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.388576   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6ccc 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.388700   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:27.388723   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.388873   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'test-1765616905262471887' 
I1213 09:08:27.388907   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6cce 16f7ec29 '') 
I1213 09:08:27.389025   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.389057   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.389194   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.389224   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('test-1765616905262471887' 'jenkins' 'jenkins' '' q (ed6cce 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.389347   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:27.389369   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.389513   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 09:08:27.389547   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rwalk tag 0 (ed6ccd 16f7ec29 '') 
I1213 09:08:27.389697   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.389755   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.389910   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tstat tag 0 fid 2
I1213 09:08:27.389941   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6ccd 16f7ec29 '') m 644 at 0 mt 1765616905 l 24 t 0 d 0 ext )
I1213 09:08:27.390090   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 2
I1213 09:08:27.390113   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.390247   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tread tag 0 fid 1 offset 258 count 262120
I1213 09:08:27.390278   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rread tag 0 count 0
I1213 09:08:27.390415   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 1
I1213 09:08:27.390446   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.391634   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 09:08:27.391703   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rerror tag 0 ename 'file not found' ecode 0
I1213 09:08:27.688271   68870 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:44148 Tclunk tag 0 fid 0
I1213 09:08:27.688320   68870 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:44148 Rclunk tag 0
I1213 09:08:27.689325   68870 main.go:127] stdlog: ufs.go:147 disconnected
I1213 09:08:27.709750   68870 out.go:179] * Unmounting /mount-9p ...
I1213 09:08:27.712831   68870 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 09:08:27.719899   68870 mount.go:180] unmount for /mount-9p ran successfully
I1213 09:08:27.720019   68870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/.mount-process: {Name:mkc53420e6c7644bca9e6b6136ed7181cd9e81ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 09:08:27.723234   68870 out.go:203] 
W1213 09:08:27.726261   68870 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 09:08:27.729183   68870 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.55s)
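Note: the stderr above shows the whole mount flow: minikube starts a userspace 9p file server (ufs) on the host bind address, then issues the guest-side mount over SSH. The guest command from this run can be replayed by hand inside the node (taken verbatim from the log; the port is ephemeral and differs per invocation):

  sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36909,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p

The mount itself worked (findmnt passed on retry and the three seeded files are listed); the subtest fails only when kubectl replace cannot reach the stopped apiserver to schedule the busybox-mount pod, which is why /mount-9p/pod-dates was never written.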
TestKubernetesUpgrade (802.91s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-355809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1213 09:38:51.888272    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-355809 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.425259695s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-355809
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-355809: (1.550120702s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-355809 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-355809 status --format={{.Host}}: exit status 7 (112.786709ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
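Note: the harness tolerates exit status 7 because minikube status encodes component states in the exit code bits (per the minikube status help text: 1 for the host, 2 for the cluster, 4 for Kubernetes), so 7 is the expected result for a fully stopped profile. Spot check (illustrative):

  out/minikube-linux-arm64 -p kubernetes-upgrade-355809 status --format={{.Host}}; echo $?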
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-355809 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-355809 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m37.315558889s)
-- stdout --
	* [kubernetes-upgrade-355809] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-355809" primary control-plane node in "kubernetes-upgrade-355809" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	
-- /stdout --
** stderr ** 
	I1213 09:39:09.603900  201245 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:39:09.604054  201245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:39:09.604067  201245 out.go:374] Setting ErrFile to fd 2...
	I1213 09:39:09.604072  201245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:39:09.604397  201245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:39:09.604847  201245 out.go:368] Setting JSON to false
	I1213 09:39:09.605822  201245 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4902,"bootTime":1765613848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:39:09.605900  201245 start.go:143] virtualization:  
	I1213 09:39:09.609599  201245 out.go:179] * [kubernetes-upgrade-355809] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:39:09.613531  201245 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:39:09.613751  201245 notify.go:221] Checking for updates...
	I1213 09:39:09.621751  201245 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:39:09.624807  201245 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:39:09.630817  201245 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:39:09.633856  201245 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:39:09.636840  201245 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:39:09.640379  201245 config.go:182] Loaded profile config "kubernetes-upgrade-355809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1213 09:39:09.641040  201245 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:39:09.699975  201245 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:39:09.700123  201245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:39:09.767746  201245 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:39:09.758409693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:39:09.767863  201245 docker.go:319] overlay module found
	I1213 09:39:09.770713  201245 out.go:179] * Using the docker driver based on existing profile
	I1213 09:39:09.773530  201245 start.go:309] selected driver: docker
	I1213 09:39:09.773551  201245 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-355809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-355809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:39:09.773657  201245 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:39:09.774353  201245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:39:09.849532  201245 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:39:09.838837512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:39:09.849904  201245 cni.go:84] Creating CNI manager for ""
	I1213 09:39:09.849966  201245 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:39:09.850013  201245 start.go:353] cluster config:
	{Name:kubernetes-upgrade-355809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-355809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:39:09.853320  201245 out.go:179] * Starting "kubernetes-upgrade-355809" primary control-plane node in "kubernetes-upgrade-355809" cluster
	I1213 09:39:09.856311  201245 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:39:09.859160  201245 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:39:09.862161  201245 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:39:09.862212  201245 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:39:09.862222  201245 cache.go:65] Caching tarball of preloaded images
	I1213 09:39:09.862296  201245 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:39:09.862304  201245 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:39:09.862401  201245 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/config.json ...
	I1213 09:39:09.862581  201245 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:39:09.886377  201245 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:39:09.886396  201245 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:39:09.886409  201245 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:39:09.886436  201245 start.go:360] acquireMachinesLock for kubernetes-upgrade-355809: {Name:mkf43c6da8982d63dbb4816dcec84a4cb80a7009 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:39:09.886492  201245 start.go:364] duration metric: took 34.413µs to acquireMachinesLock for "kubernetes-upgrade-355809"
	I1213 09:39:09.886510  201245 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:39:09.886516  201245 fix.go:54] fixHost starting: 
	I1213 09:39:09.887062  201245 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-355809 --format={{.State.Status}}
	I1213 09:39:09.904380  201245 fix.go:112] recreateIfNeeded on kubernetes-upgrade-355809: state=Stopped err=<nil>
	W1213 09:39:09.904407  201245 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:39:09.907746  201245 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-355809" ...
	I1213 09:39:09.907823  201245 cli_runner.go:164] Run: docker start kubernetes-upgrade-355809
	I1213 09:39:10.203028  201245 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-355809 --format={{.State.Status}}
	I1213 09:39:10.223570  201245 kic.go:430] container "kubernetes-upgrade-355809" state is running.
	I1213 09:39:10.223934  201245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-355809
	I1213 09:39:10.247587  201245 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/config.json ...
	I1213 09:39:10.247821  201245 machine.go:94] provisionDockerMachine start ...
	I1213 09:39:10.247879  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:10.270101  201245 main.go:143] libmachine: Using SSH client type: native
	I1213 09:39:10.270442  201245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1213 09:39:10.270451  201245 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:39:10.271296  201245 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50708->127.0.0.1:33013: read: connection reset by peer
	I1213 09:39:13.435472  201245 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-355809
	
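(The transient dial failure above is expected right after `docker start`: sshd inside the restarted container is not yet accepting connections, so the provisioner retries until the handshake succeeds about three seconds later. A minimal sketch of that kind of wait loop in Go — the address, intervals, and helper name are illustrative assumptions, not minikube's source:)

```go
// Sketch: poll a TCP endpoint until sshd accepts connections or a deadline
// passes. This illustrates the retry behind the "Error dialing TCP" line.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is up; the real SSH handshake follows
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable: %w", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:33013", 30*time.Second))
}
```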
	I1213 09:39:13.435501  201245 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-355809"
	I1213 09:39:13.435591  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:13.457938  201245 main.go:143] libmachine: Using SSH client type: native
	I1213 09:39:13.458265  201245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1213 09:39:13.458283  201245 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-355809 && echo "kubernetes-upgrade-355809" | sudo tee /etc/hostname
	I1213 09:39:13.636578  201245 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-355809
	
	I1213 09:39:13.636665  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:13.658242  201245 main.go:143] libmachine: Using SSH client type: native
	I1213 09:39:13.658549  201245 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I1213 09:39:13.658566  201245 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-355809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-355809/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-355809' | sudo tee -a /etc/hosts; 
				fi
			fi
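(The snippet the provisioner just ran keeps /etc/hosts idempotent: it touches the file only when no entry for the hostname exists, rewriting an existing 127.0.1.1 line in place and appending one otherwise. A sketch of how such a command string can be assembled in Go — the helper name is hypothetical; minikube builds a similar shell string internally:)

```go
// Sketch: build the idempotent /etc/hosts update shown in the log above.
package main

import "fmt"

func setHostsCmd(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(setHostsCmd("kubernetes-upgrade-355809"))
}
```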
	I1213 09:39:13.819602  201245 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:39:13.819680  201245 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:39:13.819722  201245 ubuntu.go:190] setting up certificates
	I1213 09:39:13.819758  201245 provision.go:84] configureAuth start
	I1213 09:39:13.819871  201245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-355809
	I1213 09:39:13.841015  201245 provision.go:143] copyHostCerts
	I1213 09:39:13.841084  201245 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:39:13.841093  201245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:39:13.841170  201245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:39:13.841273  201245 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:39:13.841279  201245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:39:13.841306  201245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:39:13.841368  201245 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:39:13.841373  201245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:39:13.841396  201245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:39:13.841450  201245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-355809 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-355809 localhost minikube]
	I1213 09:39:14.162441  201245 provision.go:177] copyRemoteCerts
	I1213 09:39:14.162551  201245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:39:14.162638  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:14.180958  201245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/kubernetes-upgrade-355809/id_rsa Username:docker}
	I1213 09:39:14.287707  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:39:14.308730  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 09:39:14.328263  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:39:14.347974  201245 provision.go:87] duration metric: took 528.176414ms to configureAuth
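(configureAuth regenerated a server certificate whose SANs come from the log line above: 127.0.0.1, 192.168.76.2, the node name, localhost, and minikube. A sketch of the shape of that certificate using Go's crypto/x509 — self-signed here for brevity and an assumption throughout; minikube signs with the minikubeCA key:)

```go
// Sketch: an x509 server-cert template carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-355809"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"kubernetes-upgrade-355809", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; the real cert is signed by the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
```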
	I1213 09:39:14.348048  201245 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:39:14.348249  201245 config.go:182] Loaded profile config "kubernetes-upgrade-355809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:39:14.348285  201245 machine.go:97] duration metric: took 4.10045568s to provisionDockerMachine
	I1213 09:39:14.348310  201245 start.go:293] postStartSetup for "kubernetes-upgrade-355809" (driver="docker")
	I1213 09:39:14.348335  201245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:39:14.348424  201245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:39:14.348492  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:14.373400  201245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/kubernetes-upgrade-355809/id_rsa Username:docker}
	I1213 09:39:14.479981  201245 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:39:14.483876  201245 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:39:14.483901  201245 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:39:14.483913  201245 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:39:14.483964  201245 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:39:14.484042  201245 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:39:14.484138  201245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:39:14.492269  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:39:14.511564  201245 start.go:296] duration metric: took 163.226581ms for postStartSetup
	I1213 09:39:14.511690  201245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:39:14.511771  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:14.541257  201245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/kubernetes-upgrade-355809/id_rsa Username:docker}
	I1213 09:39:14.645525  201245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:39:14.650832  201245 fix.go:56] duration metric: took 4.764308394s for fixHost
	I1213 09:39:14.650854  201245 start.go:83] releasing machines lock for "kubernetes-upgrade-355809", held for 4.764354236s
	I1213 09:39:14.650920  201245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-355809
	I1213 09:39:14.676845  201245 ssh_runner.go:195] Run: cat /version.json
	I1213 09:39:14.676851  201245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:39:14.676951  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:14.677006  201245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-355809
	I1213 09:39:14.713996  201245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/kubernetes-upgrade-355809/id_rsa Username:docker}
	I1213 09:39:14.721137  201245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/kubernetes-upgrade-355809/id_rsa Username:docker}
	I1213 09:39:14.831793  201245 ssh_runner.go:195] Run: systemctl --version
	I1213 09:39:14.958352  201245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:39:14.963015  201245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:39:14.963135  201245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:39:14.975290  201245 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:39:14.975354  201245 start.go:496] detecting cgroup driver to use...
	I1213 09:39:14.975393  201245 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:39:14.975450  201245 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:39:15.004456  201245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:39:15.023641  201245 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:39:15.023708  201245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:39:15.042815  201245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:39:15.057610  201245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:39:15.214086  201245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:39:15.354066  201245 docker.go:234] disabling docker service ...
	I1213 09:39:15.354180  201245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:39:15.372662  201245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:39:15.387376  201245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:39:15.542805  201245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:39:15.688738  201245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:39:15.701799  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:39:15.738845  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:39:15.762131  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:39:15.780862  201245 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:39:15.780923  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:39:15.791178  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:39:15.801412  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:39:15.811885  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:39:15.820034  201245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:39:15.827396  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:39:15.835383  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:39:15.843687  201245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:39:15.852416  201245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:39:15.859994  201245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:39:15.867571  201245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:39:16.025146  201245 ssh_runner.go:195] Run: sudo systemctl restart containerd
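(The run of sed edits above rewrites /etc/containerd/config.toml in place — sandbox image, OOM-score restriction, SystemdCgroup = false to match the host's detected cgroupfs driver, runc.v2 shims, the CNI conf_dir, and unprivileged ports — and the daemon-reload plus restart makes containerd pick up the result. For one representative edit, here is the SystemdCgroup rewrite expressed with Go's regexp package; the input fragment is made up for illustration:)

```go
// Sketch: the `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// from the log, done with Go's regexp package.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "  [plugins]\n    SystemdCgroup = true\n" // illustrative fragment
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```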
	I1213 09:39:16.225418  201245 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:39:16.225521  201245 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:39:16.230356  201245 start.go:564] Will wait 60s for crictl version
	I1213 09:39:16.230431  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:16.234153  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:39:16.260648  201245 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:39:16.260716  201245 ssh_runner.go:195] Run: containerd --version
	I1213 09:39:16.285864  201245 ssh_runner.go:195] Run: containerd --version
	I1213 09:39:16.325295  201245 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:39:16.343821  201245 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-355809 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:39:16.362771  201245 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 09:39:16.367277  201245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:39:16.379455  201245 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-355809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-355809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:39:16.379641  201245 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:39:16.379705  201245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:39:16.403706  201245 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 09:39:16.403772  201245 ssh_runner.go:195] Run: which lz4
	I1213 09:39:16.407894  201245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 09:39:16.412260  201245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 09:39:16.412294  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1213 09:39:20.624559  201245 containerd.go:563] duration metric: took 4.216686014s to copy over tarball
	I1213 09:39:20.624647  201245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 09:39:22.993802  201245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.36913027s)
	I1213 09:39:22.993880  201245 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
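(This is the root of the `preload failed, will try to load cached images` fallback above: the preload tarball is being unpacked over a /var/lib/containerd that was already populated by the pre-upgrade start, and GNU tar refuses to replace the existing zoneinfo entries, exiting with status 2. The colliding-path failure mode can be illustrated in Go — a sketch of the mechanism, not a verified reproduction of this exact run:)

```go
// Sketch: creating a directory entry where the path already exists (here as
// a symlink) fails with EEXIST, the same class of per-member error tar
// reports above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "preload")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	existing := filepath.Join(dir, "Europe")
	if err := os.Symlink("../zoneinfo/Europe", existing); err != nil { // pre-existing state
		panic(err)
	}
	if err := os.Mkdir(existing, 0o755); err != nil {
		fmt.Println(err) // mkdir .../Europe: file exists
	}
}
```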
	I1213 09:39:22.993974  201245 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:39:23.023935  201245 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 09:39:23.023961  201245 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 09:39:23.024021  201245 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:39:23.024049  201245 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.024242  201245 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 09:39:23.024261  201245 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.024347  201245 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.024361  201245 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.024439  201245 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.024440  201245 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.025836  201245 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 09:39:23.026265  201245 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.026404  201245 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.026516  201245 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.026638  201245 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.026784  201245 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:39:23.027014  201245 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.027156  201245 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.329834  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 09:39:23.329964  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 09:39:23.376225  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 09:39:23.376294  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.380229  201245 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 09:39:23.380330  201245 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 09:39:23.380412  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.393962  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 09:39:23.394085  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.400492  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:39:23.400613  201245 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 09:39:23.400654  201245 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.400725  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.401331  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 09:39:23.401413  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.432387  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 09:39:23.432465  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.435282  201245 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 09:39:23.435324  201245 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.435368  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.458957  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 09:39:23.459033  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.465073  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.465106  201245 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 09:39:23.465083  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:39:23.465144  201245 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.465158  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.465175  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.465190  201245 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 09:39:23.465212  201245 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.465238  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.496175  201245 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 09:39:23.496238  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.499778  201245 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 09:39:23.499817  201245 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.499863  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.535601  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:39:23.535753  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.535838  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.535936  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.536029  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.548995  201245 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 09:39:23.549089  201245 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.549172  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:23.549276  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.636266  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.636364  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 09:39:23.636449  201245 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 09:39:23.636539  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:39:23.636614  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.636714  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.636762  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:39:23.636927  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.719371  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:39:23.719465  201245 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 09:39:23.719762  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 09:39:23.719573  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 09:39:23.719618  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.719639  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 09:39:23.719669  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:39:23.720062  201245 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 09:39:23.725514  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:39:23.781108  201245 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 09:39:23.781175  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 09:39:23.840965  201245 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 09:39:23.841003  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 09:39:23.841079  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 09:39:23.841158  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:39:23.841214  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 09:39:23.841251  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 09:39:23.988905  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 09:39:24.078396  201245 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 09:39:24.078476  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1213 09:39:24.335862  201245 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 09:39:24.335996  201245 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 09:39:24.336060  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:39:25.152292  201245 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.073788993s)
	I1213 09:39:25.152420  201245 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 09:39:25.152445  201245 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:39:25.152488  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:25.156962  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:39:25.303191  201245 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 09:39:25.303309  201245 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:39:25.307166  201245 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 09:39:25.307216  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 09:39:25.402015  201245 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:39:25.402153  201245 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:39:25.934826  201245 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 09:39:25.934898  201245 cache_images.go:94] duration metric: took 2.910923014s to LoadCachedImages
	W1213 09:39:25.934970  201245 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0: no such file or directory
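(The warning means the fallback also came up short: seven of the eight required images loaded from the local cache, but the arm64 kube-apiserver tarball was never cached, so the local existence probe fails and kubeadm will have to pull that image during init. The probe itself is a plain stat, sketched here with the path taken from the log:)

```go
// Sketch: the existence check behind the "Unable to load cached images"
// warning above; os.Stat fails because the cached tarball was never written.
package main

import (
	"fmt"
	"os"
)

func main() {
	p := "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/" +
		"arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("X Unable to load cached images:", err)
	}
}
```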
	I1213 09:39:25.934982  201245 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:39:25.935180  201245 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-355809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-355809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:39:25.935262  201245 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:39:25.968568  201245 cni.go:84] Creating CNI manager for ""
	I1213 09:39:25.968588  201245 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:39:25.968602  201245 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:39:25.968626  201245 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-355809 NodeName:kubernetes-upgrade-355809 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:39:25.968736  201245 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-355809"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:39:25.968800  201245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:39:25.977638  201245 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:39:25.977716  201245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:39:25.985736  201245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1213 09:39:25.998647  201245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:39:26.016262  201245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1213 09:39:26.032318  201245 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:39:26.036870  201245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:39:26.049691  201245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:39:26.215758  201245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:39:26.233900  201245 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809 for IP: 192.168.76.2
	I1213 09:39:26.233960  201245 certs.go:195] generating shared ca certs ...
	I1213 09:39:26.233989  201245 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:39:26.234148  201245 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:39:26.234224  201245 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:39:26.234259  201245 certs.go:257] generating profile certs ...
	I1213 09:39:26.234374  201245 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.key
	I1213 09:39:26.234484  201245 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/apiserver.key.78c9068a
	I1213 09:39:26.234618  201245 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/proxy-client.key
	I1213 09:39:26.234759  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:39:26.234828  201245 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:39:26.234865  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:39:26.234926  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:39:26.234989  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:39:26.235035  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:39:26.235127  201245 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:39:26.235782  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:39:26.253101  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:39:26.290104  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:39:26.314242  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:39:26.346406  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 09:39:26.364822  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:39:26.382737  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:39:26.402903  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:39:26.424278  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:39:26.448713  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:39:26.471357  201245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:39:26.496041  201245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:39:26.508941  201245 ssh_runner.go:195] Run: openssl version
	I1213 09:39:26.517719  201245 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:39:26.526217  201245 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:39:26.534332  201245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:39:26.538912  201245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:39:26.538995  201245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:39:26.580774  201245 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:39:26.588416  201245 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:39:26.595981  201245 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:39:26.603638  201245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:39:26.608110  201245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:39:26.608178  201245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:39:26.652605  201245 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:39:26.660053  201245 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:39:26.667361  201245 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:39:26.675021  201245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:39:26.679420  201245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:39:26.679481  201245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:39:26.721336  201245 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:39:26.729881  201245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:39:26.734207  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:39:26.777656  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:39:26.825843  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:39:26.872726  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:39:26.927718  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:39:26.970744  201245 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
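
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 h); minikube runs it against each control-plane cert to decide whether regeneration is needed. The same check in native Go, as a sketch (expiresWithin is a hypothetical name; the path is one from this run):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemBytes expires
    // within d of now -- the native equivalent of `openssl x509 -checkend`.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        soon, err := expiresWithin(data, 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }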
	I1213 09:39:27.026502  201245 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-355809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-355809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:39:27.026593  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:39:27.026677  201245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:39:27.067027  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:39:27.067051  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:39:27.067056  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:39:27.067059  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:39:27.067062  201245 cri.go:89] found id: ""
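
The four `found id:` lines above come from running crictl with `--quiet` (print container IDs only) and a label filter for the kube-system namespace, then treating each non-empty output line as one ID. A sketch of the same listing and parse (listKubeSystemIDs is a hypothetical name; the command is the one from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemIDs returns the IDs of all kube-system containers, one
    // per output line of `crictl ps -a --quiet --label ...`.
    func listKubeSystemIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemIDs()
        fmt.Println(ids, err)
    }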
	I1213 09:39:27.067120  201245 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1213 09:39:27.086960  201245 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:39:27Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1213 09:39:27.087051  201245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:39:27.096254  201245 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:39:27.096283  201245 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:39:27.096362  201245 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:39:27.105749  201245 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:39:27.106218  201245 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-355809" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:39:27.106332  201245 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-355809" cluster setting kubeconfig missing "kubernetes-upgrade-355809" context setting]
	I1213 09:39:27.106718  201245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:39:27.107338  201245 kapi.go:59] client config for kubernetes-upgrade-355809: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.key", CAFile:"/home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:39:27.107958  201245 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 09:39:27.107981  201245 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 09:39:27.108054  201245 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 09:39:27.108060  201245 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 09:39:27.108074  201245 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
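
kapi.go assembles the rest.Config dumped above directly from the profile's client cert, key and CA; the envvar lines are client-go logging its feature-gate defaults as the client initializes. One standard way to obtain an equivalent config and clientset from the (just-repaired) kubeconfig file, using client-go -- a sketch, with the kubeconfig path taken from this run:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from the kubeconfig file written above.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/22128-2315/kubeconfig")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("server:", cfg.Host, "client ready:", clientset != nil)
    }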
	I1213 09:39:27.108373  201245 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:39:27.119480  201245 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 09:38:46.370546891 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 09:39:26.026916596 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-355809"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
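
The drift shown in the diff is the kubeadm v1beta3 to v1beta4 schema change: `extraArgs` moves from a string map to an ordered list of name/value pairs (which also permits repeated flags), and `kubernetesVersion` is bumped from v1.28.0 to v1.35.0-beta.0. A sketch of the map-to-list conversion at the heart of that migration, using a stand-in Arg type rather than the real kubeadm API structs:

    package main

    import (
        "fmt"
        "sort"
    )

    // Arg mirrors the shape of a v1beta4 extraArgs entry (stand-in type,
    // not the real kubeadm API struct).
    type Arg struct {
        Name  string `json:"name"`
        Value string `json:"value"`
    }

    // toV1beta4Args converts a v1beta3-style string map into the ordered
    // name/value list that v1beta4 expects.
    func toV1beta4Args(m map[string]string) []Arg {
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        sort.Strings(keys) // deterministic order for rendering
        args := make([]Arg, 0, len(keys))
        for _, k := range keys {
            args = append(args, Arg{Name: k, Value: m[k]})
        }
        return args
    }

    func main() {
        fmt.Println(toV1beta4Args(map[string]string{
            "allocate-node-cidrs": "true",
            "leader-elect":        "false",
        }))
    }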
	I1213 09:39:27.119508  201245 kubeadm.go:1161] stopping kube-system containers ...
	I1213 09:39:27.119563  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1213 09:39:27.119638  201245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:39:27.157100  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:39:27.157120  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:39:27.157142  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:39:27.157147  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:39:27.157150  201245 cri.go:89] found id: ""
	I1213 09:39:27.157156  201245 cri.go:252] Stopping containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:39:27.157221  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:39:27.163027  201245 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6
	I1213 09:39:27.239587  201245 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 09:39:27.266835  201245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:39:27.276262  201245 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 13 09:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 13 09:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 09:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 13 09:38 /etc/kubernetes/scheduler.conf
	
	I1213 09:39:27.276325  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:39:27.285536  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:39:27.294610  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:39:27.314903  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:39:27.314968  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:39:27.335179  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:39:27.342752  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:39:27.342813  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
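
For each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint; grep exiting with status 1 (pattern absent) marks the file stale, and it is removed so the kubeadm phases below can regenerate it. A sketch of that loop (removeStaleConf is a hypothetical name; endpoint and file list are from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // removeStaleConf deletes conf files that do not mention the expected
    // control-plane endpoint, mirroring the grep/rm sequence above.
    func removeStaleConf(endpoint string, files []string) error {
        for _, f := range files {
            // grep exits 1 when the pattern is absent, >1 on real errors.
            err := exec.Command("sudo", "grep", endpoint, f).Run()
            if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    return err
                }
            } else if err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        err := removeStaleConf("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
        fmt.Println(err)
    }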
	I1213 09:39:27.350240  201245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:39:27.358173  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:39:27.431091  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:39:29.500655  201245 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.069534522s)
	I1213 09:39:29.500724  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:39:29.772234  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 09:39:29.862386  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
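
Rather than a full `kubeadm init`, the restart replays the individual init phases against the new config in the order shown above: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence (runInitPhases is a hypothetical name; binary and config paths are from this run):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runInitPhases replays the kubeadm init phases used for a cluster
    // restart, in the order they appear in the log above.
    func runInitPhases(binDir, config string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            cmd := exec.Command(binDir+"/kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("phase %v: %w", p, err)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.35.0-beta.0",
            "/var/tmp/minikube/kubeadm.yaml"))
    }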
	I1213 09:39:29.915663  201245 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:39:29.915741  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:39:30.415876  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep poll repeated at ~500ms intervals; 116 identical lines with timestamps 09:39:30.915924 through 09:40:28.916521 elided ...]
	I1213 09:40:29.416780  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
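
The half-second cadence of the lines above is a plain poll: run pgrep until it exits 0 (process found) or a deadline passes; here it ran the full minute without ever finding a kube-apiserver process. A sketch of such a loop (waitForProcess is a hypothetical name; pattern and interval match the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep every interval until the pattern matches a
    // running process or the timeout elapses, like the 500ms loop above.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil // pgrep exit 0: process found
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("timed out waiting for %q", pattern)
    }

    func main() {
        err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
        fmt.Println(err)
    }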
	I1213 09:40:29.916412  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:29.916506  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:29.958029  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:29.958049  201245 cri.go:89] found id: ""
	I1213 09:40:29.958057  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:29.958111  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:29.962780  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:29.962850  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:30.013953  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:30.013975  201245 cri.go:89] found id: ""
	I1213 09:40:30.013984  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:30.014044  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:30.027273  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:30.027436  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:30.101795  201245 cri.go:89] found id: ""
	I1213 09:40:30.101820  201245 logs.go:282] 0 containers: []
	W1213 09:40:30.101828  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:30.101835  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:30.101905  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:30.136177  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:30.136202  201245 cri.go:89] found id: ""
	I1213 09:40:30.136211  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:30.136271  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:30.141166  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:30.141245  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:30.183951  201245 cri.go:89] found id: ""
	I1213 09:40:30.183973  201245 logs.go:282] 0 containers: []
	W1213 09:40:30.183982  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:30.183989  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:30.184048  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:30.218957  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:30.219031  201245 cri.go:89] found id: ""
	I1213 09:40:30.219052  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:30.219142  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:30.224508  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:30.224633  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:30.257680  201245 cri.go:89] found id: ""
	I1213 09:40:30.257761  201245 logs.go:282] 0 containers: []
	W1213 09:40:30.257784  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:30.257806  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:30.257912  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:30.285815  201245 cri.go:89] found id: ""
	I1213 09:40:30.285889  201245 logs.go:282] 0 containers: []
	W1213 09:40:30.285912  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:30.285940  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:30.285983  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:30.326011  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:30.326093  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:30.364105  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:30.364189  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:30.397848  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:30.397925  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:30.467240  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:30.467318  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:30.538278  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:30.538340  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:30.538375  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:30.580288  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:30.580359  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:30.615865  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:30.615912  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:30.643085  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:30.643114  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
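
With the apiserver still absent, each pass above falls back to collecting diagnostics from the sources it can reach without it: the tail of each found container's CRI log, the kubelet and containerd journals, and dmesg; only `kubectl describe nodes` fails, since it needs the (refused) apiserver on localhost:8443. A sketch of one collection pass (gatherLogs is a hypothetical name; the shell commands are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs runs each diagnostic command via bash -c, as the log does,
    // and returns the combined output keyed by source name.
    func gatherLogs(sources map[string]string) map[string]string {
        out := make(map[string]string, len(sources))
        for name, cmd := range sources {
            b, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            out[name] = string(b)
        }
        return out
    }

    func main() {
        logs := gatherLogs(map[string]string{
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "containerd":     "sudo journalctl -u containerd -n 400",
            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "kube-apiserver": "sudo crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6",
        })
        for name, text := range logs {
            fmt.Printf("%s: %d bytes\n", name, len(text))
        }
    }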
	I1213 09:40:33.199624  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:33.213783  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:33.213887  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:33.247654  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:33.247679  201245 cri.go:89] found id: ""
	I1213 09:40:33.247689  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:33.247809  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:33.252800  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:33.252947  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:33.301732  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:33.301803  201245 cri.go:89] found id: ""
	I1213 09:40:33.301826  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:33.301898  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:33.307999  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:33.308114  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:33.335742  201245 cri.go:89] found id: ""
	I1213 09:40:33.335817  201245 logs.go:282] 0 containers: []
	W1213 09:40:33.335840  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:33.335858  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:33.335926  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:33.371158  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:33.371231  201245 cri.go:89] found id: ""
	I1213 09:40:33.371253  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:33.371328  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:33.375726  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:33.375845  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:33.417829  201245 cri.go:89] found id: ""
	I1213 09:40:33.417868  201245 logs.go:282] 0 containers: []
	W1213 09:40:33.417878  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:33.417884  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:33.417983  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:33.450551  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:33.450619  201245 cri.go:89] found id: ""
	I1213 09:40:33.450642  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:33.450709  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:33.454675  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:33.454790  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:33.484824  201245 cri.go:89] found id: ""
	I1213 09:40:33.484900  201245 logs.go:282] 0 containers: []
	W1213 09:40:33.484922  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:33.484940  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:33.485024  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:33.538013  201245 cri.go:89] found id: ""
	I1213 09:40:33.538087  201245 logs.go:282] 0 containers: []
	W1213 09:40:33.538108  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:33.538153  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:33.538185  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:33.554499  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:33.554572  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:33.662773  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:33.662835  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:33.662878  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:33.698359  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:33.698396  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:33.731807  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:33.731839  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:33.793651  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:33.793733  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:33.847356  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:33.847429  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:33.890716  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:33.890791  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:33.922658  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:33.922881  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:36.463609  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:36.475365  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:36.475428  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:36.549434  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:36.549453  201245 cri.go:89] found id: ""
	I1213 09:40:36.549461  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:36.549520  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:36.567157  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:36.567233  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:36.607971  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:36.608051  201245 cri.go:89] found id: ""
	I1213 09:40:36.608061  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:36.608156  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:36.615939  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:36.616028  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:36.659064  201245 cri.go:89] found id: ""
	I1213 09:40:36.659085  201245 logs.go:282] 0 containers: []
	W1213 09:40:36.659093  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:36.659099  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:36.659166  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:36.690887  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:36.690905  201245 cri.go:89] found id: ""
	I1213 09:40:36.690914  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:36.690968  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:36.698297  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:36.698363  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:36.748134  201245 cri.go:89] found id: ""
	I1213 09:40:36.748155  201245 logs.go:282] 0 containers: []
	W1213 09:40:36.748163  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:36.748170  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:36.748226  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:36.793714  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:36.793733  201245 cri.go:89] found id: ""
	I1213 09:40:36.793742  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:36.793797  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:36.797634  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:36.797773  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:36.845686  201245 cri.go:89] found id: ""
	I1213 09:40:36.845706  201245 logs.go:282] 0 containers: []
	W1213 09:40:36.845715  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:36.845721  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:36.845798  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:36.914370  201245 cri.go:89] found id: ""
	I1213 09:40:36.914391  201245 logs.go:282] 0 containers: []
	W1213 09:40:36.914399  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:36.914412  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:36.914423  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:36.955276  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:36.955352  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:36.977691  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:36.977714  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:37.090695  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:37.090712  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:37.090725  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:37.147676  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:37.147765  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:37.213345  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:37.213437  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:37.286784  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:37.286811  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:37.372280  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:37.372324  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:37.433605  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:37.433684  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:39.998824  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:40.011676  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:40.011798  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:40.054835  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:40.054863  201245 cri.go:89] found id: ""
	I1213 09:40:40.054871  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:40.054928  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:40.059664  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:40.059751  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:40.101739  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:40.101806  201245 cri.go:89] found id: ""
	I1213 09:40:40.101829  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:40.101917  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:40.108138  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:40.108253  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:40.157180  201245 cri.go:89] found id: ""
	I1213 09:40:40.157244  201245 logs.go:282] 0 containers: []
	W1213 09:40:40.157265  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:40.157283  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:40.157369  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:40.213543  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:40.213609  201245 cri.go:89] found id: ""
	I1213 09:40:40.213632  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:40.213732  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:40.218189  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:40.218299  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:40.261468  201245 cri.go:89] found id: ""
	I1213 09:40:40.261497  201245 logs.go:282] 0 containers: []
	W1213 09:40:40.261506  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:40.261512  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:40.261570  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:40.297005  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:40.297027  201245 cri.go:89] found id: ""
	I1213 09:40:40.297036  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:40.297100  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:40.303564  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:40.303657  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:40.372718  201245 cri.go:89] found id: ""
	I1213 09:40:40.372750  201245 logs.go:282] 0 containers: []
	W1213 09:40:40.372759  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:40.372773  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:40.372850  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:40.408074  201245 cri.go:89] found id: ""
	I1213 09:40:40.408108  201245 logs.go:282] 0 containers: []
	W1213 09:40:40.408118  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:40.408133  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:40.408145  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:40.429759  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:40.429789  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:40.542682  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:40.542705  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:40.542729  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:40.678200  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:40.678277  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:40.735093  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:40.735171  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:40.775337  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:40.775484  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:40.809855  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:40.810010  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:40.866581  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:40.866651  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:40.934450  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:40.934523  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:43.498991  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:43.509921  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:43.509989  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:43.596167  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:43.596190  201245 cri.go:89] found id: ""
	I1213 09:40:43.596198  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:43.596257  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:43.604357  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:43.604439  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:43.648613  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:43.648636  201245 cri.go:89] found id: ""
	I1213 09:40:43.648644  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:43.648732  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:43.653282  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:43.653361  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:43.704275  201245 cri.go:89] found id: ""
	I1213 09:40:43.704300  201245 logs.go:282] 0 containers: []
	W1213 09:40:43.704309  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:43.704315  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:43.704372  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:43.734624  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:43.734646  201245 cri.go:89] found id: ""
	I1213 09:40:43.734655  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:43.734707  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:43.743862  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:43.743940  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:43.772909  201245 cri.go:89] found id: ""
	I1213 09:40:43.772934  201245 logs.go:282] 0 containers: []
	W1213 09:40:43.772943  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:43.772949  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:43.773007  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:43.807947  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:43.807969  201245 cri.go:89] found id: ""
	I1213 09:40:43.807977  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:43.808032  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:43.812075  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:43.812149  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:43.850162  201245 cri.go:89] found id: ""
	I1213 09:40:43.850186  201245 logs.go:282] 0 containers: []
	W1213 09:40:43.850195  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:43.850201  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:43.850258  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:43.905682  201245 cri.go:89] found id: ""
	I1213 09:40:43.905708  201245 logs.go:282] 0 containers: []
	W1213 09:40:43.905716  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:43.905729  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:43.905740  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:43.973125  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:43.973162  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:44.083531  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:44.083549  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:44.083563  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:44.160307  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:44.160339  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:44.208929  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:44.208961  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:44.269694  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:44.269727  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:44.349170  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:44.349201  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:44.388532  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:44.388565  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:44.465504  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:44.465528  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
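
Every "describe nodes" attempt in this log fails identically: kubectl cannot reach localhost:8443 inside the node, even though a kube-apiserver container exists in containerd — so the apiserver process is presumably crash-looping or never binding its secure port. A quick way to reproduce the symptom is to poll the port until it accepts a TCP connection or a deadline passes; this standalone Go sketch takes the address and the roughly 3-second cadence from the log, and everything else is an assumption, not part of the test suite:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const addr = "localhost:8443" // the address kubectl is refused on above
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("apiserver port is accepting connections")
    			return
    		}
    		fmt.Printf("still refused: %v\n", err)
    		time.Sleep(3 * time.Second) // roughly the retry cadence visible in the log
    	}
    	fmt.Println("gave up: port never opened")
    }
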
	I1213 09:40:47.007645  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:47.021860  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:47.021930  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:47.053621  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:47.053639  201245 cri.go:89] found id: ""
	I1213 09:40:47.053658  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:47.053714  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:47.058345  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:47.058416  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:47.107054  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:47.107073  201245 cri.go:89] found id: ""
	I1213 09:40:47.107081  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:47.107148  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:47.111155  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:47.111222  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:47.152527  201245 cri.go:89] found id: ""
	I1213 09:40:47.152547  201245 logs.go:282] 0 containers: []
	W1213 09:40:47.152555  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:47.152561  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:47.152618  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:47.186774  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:47.186793  201245 cri.go:89] found id: ""
	I1213 09:40:47.186802  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:47.186858  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:47.191921  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:47.191997  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:47.243065  201245 cri.go:89] found id: ""
	I1213 09:40:47.243088  201245 logs.go:282] 0 containers: []
	W1213 09:40:47.243096  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:47.243103  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:47.243174  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:47.292381  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:47.292402  201245 cri.go:89] found id: ""
	I1213 09:40:47.292410  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:47.292467  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:47.304341  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:47.304437  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:47.347671  201245 cri.go:89] found id: ""
	I1213 09:40:47.347694  201245 logs.go:282] 0 containers: []
	W1213 09:40:47.347702  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:47.347709  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:47.347787  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:47.409807  201245 cri.go:89] found id: ""
	I1213 09:40:47.409828  201245 logs.go:282] 0 containers: []
	W1213 09:40:47.409836  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:47.409850  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:47.409873  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:47.496270  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:47.496300  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:47.565541  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:47.571178  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:47.627001  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:47.627027  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:47.648241  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:47.648319  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:47.743960  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:47.744020  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:47.744056  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:47.786600  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:47.786675  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:47.831929  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:47.831999  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:47.886881  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:47.886958  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:50.425377  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:50.438735  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:50.438801  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:50.473977  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:50.473997  201245 cri.go:89] found id: ""
	I1213 09:40:50.474005  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:50.474059  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:50.477806  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:50.477877  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:50.505881  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:50.505900  201245 cri.go:89] found id: ""
	I1213 09:40:50.505909  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:50.505965  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:50.512399  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:50.512453  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:50.581480  201245 cri.go:89] found id: ""
	I1213 09:40:50.581517  201245 logs.go:282] 0 containers: []
	W1213 09:40:50.581525  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:50.581531  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:50.581589  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:50.640805  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:50.640835  201245 cri.go:89] found id: ""
	I1213 09:40:50.640843  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:50.640898  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:50.644998  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:50.645067  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:50.680383  201245 cri.go:89] found id: ""
	I1213 09:40:50.680405  201245 logs.go:282] 0 containers: []
	W1213 09:40:50.680414  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:50.680420  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:50.680491  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:50.712635  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:50.712654  201245 cri.go:89] found id: ""
	I1213 09:40:50.712662  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:50.712719  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:50.716558  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:50.716682  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:50.746122  201245 cri.go:89] found id: ""
	I1213 09:40:50.746145  201245 logs.go:282] 0 containers: []
	W1213 09:40:50.746153  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:50.746159  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:50.746218  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:50.796796  201245 cri.go:89] found id: ""
	I1213 09:40:50.796817  201245 logs.go:282] 0 containers: []
	W1213 09:40:50.796826  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:50.796839  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:50.796851  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:50.891083  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:50.891100  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:50.891113  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:50.943083  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:50.943116  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:50.999418  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:50.999490  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:51.041375  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:51.041410  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:51.110790  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:51.110818  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:51.184269  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:51.184308  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:51.237674  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:51.237745  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:51.301707  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:51.301776  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
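
Each polling cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, which appears to be the runner's check that an apiserver process is alive before it re-gathers logs: -f matches the pattern against the full command line, -x requires the whole line to match, and -n keeps only the newest matching process. A standalone sketch of that check (illustrative; minikube actually issues it over SSH via its ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		// pgrep exits non-zero when nothing matches; exec surfaces that as an error.
    		fmt.Println("no kube-apiserver process found:", err)
    		return
    	}
    	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    }
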
	I1213 09:40:53.833692  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:53.848877  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:53.848947  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:53.881553  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:53.881617  201245 cri.go:89] found id: ""
	I1213 09:40:53.881639  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:53.881734  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:53.886435  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:53.886564  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:53.918102  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:53.918171  201245 cri.go:89] found id: ""
	I1213 09:40:53.918194  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:53.918262  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:53.922886  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:53.923026  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:53.969696  201245 cri.go:89] found id: ""
	I1213 09:40:53.969787  201245 logs.go:282] 0 containers: []
	W1213 09:40:53.969812  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:53.969832  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:53.969908  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:54.005743  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:54.005816  201245 cri.go:89] found id: ""
	I1213 09:40:54.005841  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:54.005935  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:54.011079  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:54.011206  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:54.043022  201245 cri.go:89] found id: ""
	I1213 09:40:54.043098  201245 logs.go:282] 0 containers: []
	W1213 09:40:54.043121  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:54.043152  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:54.043232  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:54.073604  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:54.073674  201245 cri.go:89] found id: ""
	I1213 09:40:54.073696  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:54.073777  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:54.078325  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:54.078439  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:54.109840  201245 cri.go:89] found id: ""
	I1213 09:40:54.109875  201245 logs.go:282] 0 containers: []
	W1213 09:40:54.109884  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:54.109907  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:54.109988  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:54.146446  201245 cri.go:89] found id: ""
	I1213 09:40:54.146520  201245 logs.go:282] 0 containers: []
	W1213 09:40:54.146544  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:54.146570  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:54.146596  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:54.184663  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:54.184734  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:54.264971  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:54.267666  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:54.434793  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:54.434812  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:54.434825  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:54.516149  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:54.516222  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:54.553528  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:54.559060  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:40:54.610249  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:54.610274  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:54.635649  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:54.635673  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:54.716361  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:54.716433  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:57.283000  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:40:57.294609  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:40:57.294674  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:40:57.324872  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:57.324891  201245 cri.go:89] found id: ""
	I1213 09:40:57.324899  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:40:57.324962  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:57.328873  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:40:57.328944  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:40:57.358743  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:57.358763  201245 cri.go:89] found id: ""
	I1213 09:40:57.358772  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:40:57.358830  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:57.363787  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:40:57.363890  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:40:57.394248  201245 cri.go:89] found id: ""
	I1213 09:40:57.394312  201245 logs.go:282] 0 containers: []
	W1213 09:40:57.394336  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:40:57.394355  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:40:57.394438  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:40:57.427654  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:57.427679  201245 cri.go:89] found id: ""
	I1213 09:40:57.427688  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:40:57.427753  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:57.431707  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:40:57.431788  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:40:57.456867  201245 cri.go:89] found id: ""
	I1213 09:40:57.456891  201245 logs.go:282] 0 containers: []
	W1213 09:40:57.456901  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:40:57.456907  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:40:57.456967  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:40:57.481347  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:57.481370  201245 cri.go:89] found id: ""
	I1213 09:40:57.481379  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:40:57.481438  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:40:57.485337  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:40:57.485466  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:40:57.514151  201245 cri.go:89] found id: ""
	I1213 09:40:57.514175  201245 logs.go:282] 0 containers: []
	W1213 09:40:57.514184  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:40:57.514190  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:40:57.514255  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:40:57.539492  201245 cri.go:89] found id: ""
	I1213 09:40:57.539553  201245 logs.go:282] 0 containers: []
	W1213 09:40:57.539563  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:40:57.539577  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:40:57.539591  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:40:57.604580  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:40:57.604602  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:40:57.604614  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:40:57.633275  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:40:57.633309  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:40:57.693797  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:40:57.693835  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:40:57.707763  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:40:57.707790  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:40:57.751202  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:40:57.751232  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:40:57.788797  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:40:57.788871  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:40:57.825492  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:40:57.825525  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:40:57.872029  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:40:57.872064  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
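
Besides the per-container tails, each cycle also collects host-level sources: kubelet and containerd units via journalctl, kernel warnings via dmesg, and an overall container listing via crictl with a docker fallback. A sketch of that fan-out, with the commands copied verbatim from the log and the surrounding structure purely illustrative (it is not minikube's logs.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Commands copied from the log above; map iteration order is not significant.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    	}
    }
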
	I1213 09:41:00.403734  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:00.415856  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:00.415979  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:00.446606  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:00.446628  201245 cri.go:89] found id: ""
	I1213 09:41:00.446637  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:00.446692  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:00.450711  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:00.450784  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:00.480374  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:00.480399  201245 cri.go:89] found id: ""
	I1213 09:41:00.480409  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:00.480478  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:00.484846  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:00.484918  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:00.512274  201245 cri.go:89] found id: ""
	I1213 09:41:00.512300  201245 logs.go:282] 0 containers: []
	W1213 09:41:00.512310  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:00.512316  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:00.512392  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:00.541599  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:00.541621  201245 cri.go:89] found id: ""
	I1213 09:41:00.541630  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:00.541686  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:00.545732  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:00.545809  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:00.577441  201245 cri.go:89] found id: ""
	I1213 09:41:00.577471  201245 logs.go:282] 0 containers: []
	W1213 09:41:00.577481  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:00.577486  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:00.577547  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:00.604477  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:00.604500  201245 cri.go:89] found id: ""
	I1213 09:41:00.604509  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:00.604584  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:00.608611  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:00.608707  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:00.633750  201245 cri.go:89] found id: ""
	I1213 09:41:00.633776  201245 logs.go:282] 0 containers: []
	W1213 09:41:00.633785  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:00.633791  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:00.633876  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:00.661166  201245 cri.go:89] found id: ""
	I1213 09:41:00.661205  201245 logs.go:282] 0 containers: []
	W1213 09:41:00.661214  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:00.661228  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:00.661239  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:00.727620  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:00.727644  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:00.727665  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:00.758160  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:00.758191  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:00.787305  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:00.787331  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:00.847030  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:00.847063  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:00.859852  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:00.859878  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:00.902139  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:00.902171  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:00.933465  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:00.933496  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:00.986371  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:00.986402  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:03.517783  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:03.531970  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:03.532053  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:03.582133  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:03.582152  201245 cri.go:89] found id: ""
	I1213 09:41:03.582160  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:03.582214  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:03.586640  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:03.586707  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:03.621721  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:03.621739  201245 cri.go:89] found id: ""
	I1213 09:41:03.621747  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:03.621804  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:03.628285  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:03.628405  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:03.667112  201245 cri.go:89] found id: ""
	I1213 09:41:03.667247  201245 logs.go:282] 0 containers: []
	W1213 09:41:03.667288  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:03.667309  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:03.667409  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:03.701891  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:03.701914  201245 cri.go:89] found id: ""
	I1213 09:41:03.701923  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:03.702024  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:03.706214  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:03.706291  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:03.733789  201245 cri.go:89] found id: ""
	I1213 09:41:03.733815  201245 logs.go:282] 0 containers: []
	W1213 09:41:03.733824  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:03.733830  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:03.733892  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:03.764143  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:03.764165  201245 cri.go:89] found id: ""
	I1213 09:41:03.764174  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:03.764237  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:03.768283  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:03.768357  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:03.807990  201245 cri.go:89] found id: ""
	I1213 09:41:03.808017  201245 logs.go:282] 0 containers: []
	W1213 09:41:03.808026  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:03.808032  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:03.808090  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:03.836819  201245 cri.go:89] found id: ""
	I1213 09:41:03.836845  201245 logs.go:282] 0 containers: []
	W1213 09:41:03.836854  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:03.836871  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:03.836882  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:03.901570  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:03.901605  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:03.940755  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:03.940793  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:03.972518  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:03.972555  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:04.007461  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:04.007495  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:04.032594  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:04.032624  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:04.150555  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:04.150578  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:04.150591  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:04.191348  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:04.191380  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:04.232844  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:04.232878  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:06.775619  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:06.785537  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:06.785600  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:06.818182  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:06.818205  201245 cri.go:89] found id: ""
	I1213 09:41:06.818213  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:06.818266  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:06.822765  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:06.822836  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:06.863742  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:06.863764  201245 cri.go:89] found id: ""
	I1213 09:41:06.863773  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:06.863825  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:06.867758  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:06.867862  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:06.918110  201245 cri.go:89] found id: ""
	I1213 09:41:06.918135  201245 logs.go:282] 0 containers: []
	W1213 09:41:06.918144  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:06.918152  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:06.918262  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:06.973744  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:06.973767  201245 cri.go:89] found id: ""
	I1213 09:41:06.973775  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:06.973862  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:06.978076  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:06.978178  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:07.023709  201245 cri.go:89] found id: ""
	I1213 09:41:07.023735  201245 logs.go:282] 0 containers: []
	W1213 09:41:07.023744  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:07.023750  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:07.023862  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:07.070095  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:07.070118  201245 cri.go:89] found id: ""
	I1213 09:41:07.070127  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:07.070213  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:07.076958  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:07.077050  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:07.122315  201245 cri.go:89] found id: ""
	I1213 09:41:07.122341  201245 logs.go:282] 0 containers: []
	W1213 09:41:07.122349  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:07.122356  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:07.122462  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:07.162758  201245 cri.go:89] found id: ""
	I1213 09:41:07.162783  201245 logs.go:282] 0 containers: []
	W1213 09:41:07.162798  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:07.162837  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:07.162854  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:07.228903  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:07.228937  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:07.253648  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:07.253676  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:07.405986  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:07.406009  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:07.406025  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:07.455709  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:07.455739  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:07.522368  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:07.522397  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:07.561521  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:07.561560  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:07.609946  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:07.610030  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:07.674332  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:07.674572  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
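	The block above is one full iteration of minikube's apiserver wait loop: probe for a running kube-apiserver process, enumerate CRI containers for each control-plane component, then gather logs from whatever was found before retrying. A minimal way to reproduce the probe half by hand, reusing the exact commands recorded in the log (run inside the minikube node, e.g. via minikube ssh; the quoting around the pgrep pattern is our addition for interactive use):

	    # Is a kube-apiserver process running? Prints the newest matching PID.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Which containers exist for a given component? Prints container IDs only.
	    sudo crictl ps -a --quiet --name=kube-apiserver

	An empty result from the second command for a component is what the loop reports as the "0 containers" / No container was found lines above.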
	I1213 09:41:10.232040  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:10.243776  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:10.243894  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:10.292396  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:10.292468  201245 cri.go:89] found id: ""
	I1213 09:41:10.292490  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:10.292603  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:10.297818  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:10.297939  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:10.336092  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:10.336161  201245 cri.go:89] found id: ""
	I1213 09:41:10.336184  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:10.336271  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:10.342901  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:10.343027  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:10.369196  201245 cri.go:89] found id: ""
	I1213 09:41:10.369222  201245 logs.go:282] 0 containers: []
	W1213 09:41:10.369230  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:10.369237  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:10.369326  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:10.397790  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:10.397811  201245 cri.go:89] found id: ""
	I1213 09:41:10.397819  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:10.397900  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:10.401591  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:10.401700  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:10.429734  201245 cri.go:89] found id: ""
	I1213 09:41:10.429759  201245 logs.go:282] 0 containers: []
	W1213 09:41:10.429768  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:10.429775  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:10.429833  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:10.475640  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:10.475713  201245 cri.go:89] found id: ""
	I1213 09:41:10.475737  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:10.475825  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:10.481039  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:10.481158  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:10.509173  201245 cri.go:89] found id: ""
	I1213 09:41:10.509247  201245 logs.go:282] 0 containers: []
	W1213 09:41:10.509283  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:10.509307  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:10.509394  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:10.541521  201245 cri.go:89] found id: ""
	I1213 09:41:10.541601  201245 logs.go:282] 0 containers: []
	W1213 09:41:10.541623  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:10.541664  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:10.541693  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:10.641472  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:10.641548  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:10.641577  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:10.701185  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:10.701263  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:10.768022  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:10.768053  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:10.820836  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:10.820870  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:10.855930  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:10.855968  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:10.923350  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:10.923388  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:10.938816  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:10.938842  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:10.991975  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:10.992008  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
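	Once the container inventory is taken, each iteration collects the same fixed set of logs. The commands below are copied verbatim from the Run: lines above; only the container ID is a placeholder, to be filled in from the crictl ps output:

	    # systemd unit logs for the kubelet and containerd (last 400 lines each).
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400

	    # Kernel warnings and errors, unpaginated and uncolored.
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	    # Per-container logs for a component found by crictl ps, e.g. the apiserver.
	    sudo /usr/local/bin/crictl logs --tail 400 <container-id>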
	I1213 09:41:13.596126  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:13.606846  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:13.606921  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:13.638110  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:13.638130  201245 cri.go:89] found id: ""
	I1213 09:41:13.638138  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:13.638197  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:13.644015  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:13.644083  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:13.673224  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:13.673242  201245 cri.go:89] found id: ""
	I1213 09:41:13.673250  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:13.673303  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:13.676997  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:13.677061  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:13.704035  201245 cri.go:89] found id: ""
	I1213 09:41:13.704057  201245 logs.go:282] 0 containers: []
	W1213 09:41:13.704065  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:13.704072  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:13.704140  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:13.736710  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:13.736779  201245 cri.go:89] found id: ""
	I1213 09:41:13.736814  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:13.736899  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:13.740998  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:13.741067  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:13.776362  201245 cri.go:89] found id: ""
	I1213 09:41:13.776383  201245 logs.go:282] 0 containers: []
	W1213 09:41:13.776392  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:13.776398  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:13.776459  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:13.803184  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:13.803203  201245 cri.go:89] found id: ""
	I1213 09:41:13.803211  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:13.803266  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:13.810110  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:13.810182  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:13.852183  201245 cri.go:89] found id: ""
	I1213 09:41:13.852205  201245 logs.go:282] 0 containers: []
	W1213 09:41:13.852213  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:13.852219  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:13.852277  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:13.888572  201245 cri.go:89] found id: ""
	I1213 09:41:13.888635  201245 logs.go:282] 0 containers: []
	W1213 09:41:13.888651  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:13.888666  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:13.888684  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:13.938567  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:13.938597  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:13.997695  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:13.997732  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:14.058878  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:14.058970  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:14.142898  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:14.142970  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:14.161437  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:14.161466  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:14.280270  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:14.280286  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:14.280302  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:14.326635  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:14.326706  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:14.369678  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:14.369710  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
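	Every "describe nodes" attempt in this section fails identically: kubectl, pointed at /var/lib/minikube/kubeconfig, gets connection refused on localhost:8443, even though an apiserver container exists. That distinguishes "the container is not serving on its port" from "the container is missing". The kubectl invocation below is copied from the log; the curl probe is our addition (not among the logged commands) and is one quick way to test the endpoint directly, with -k skipping TLS verification:

	    # The exact command the loop runs; it exits 1 with connection refused.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig

	    # Hypothetical direct probe of the same endpoint; expect
	    # "Connection refused" while the apiserver is not listening.
	    curl -k https://localhost:8443/healthz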
	I1213 09:41:16.909763  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:16.927469  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:16.927600  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:16.959583  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:16.959607  201245 cri.go:89] found id: ""
	I1213 09:41:16.959615  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:16.959674  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:16.963960  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:16.964048  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:17.005223  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:17.005244  201245 cri.go:89] found id: ""
	I1213 09:41:17.005252  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:17.005312  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:17.012211  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:17.012288  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:17.074322  201245 cri.go:89] found id: ""
	I1213 09:41:17.074347  201245 logs.go:282] 0 containers: []
	W1213 09:41:17.074355  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:17.074361  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:17.074419  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:17.146746  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:17.146769  201245 cri.go:89] found id: ""
	I1213 09:41:17.146779  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:17.146835  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:17.150989  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:17.151063  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:17.179685  201245 cri.go:89] found id: ""
	I1213 09:41:17.179708  201245 logs.go:282] 0 containers: []
	W1213 09:41:17.179717  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:17.179723  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:17.179782  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:17.205985  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:17.206003  201245 cri.go:89] found id: ""
	I1213 09:41:17.206010  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:17.206065  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:17.210316  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:17.210383  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:17.242915  201245 cri.go:89] found id: ""
	I1213 09:41:17.242936  201245 logs.go:282] 0 containers: []
	W1213 09:41:17.242944  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:17.242950  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:17.243006  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:17.274538  201245 cri.go:89] found id: ""
	I1213 09:41:17.274559  201245 logs.go:282] 0 containers: []
	W1213 09:41:17.274568  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:17.274580  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:17.274591  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:17.354492  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:17.354509  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:17.354521  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:17.389909  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:17.389985  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:17.424692  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:17.424766  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:17.455307  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:17.455340  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:17.508629  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:17.508658  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:17.568224  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:17.568258  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:17.604082  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:17.604116  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:17.636232  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:17.636262  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
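	The "container status" step is the only one with shell logic in it: a backtick command substitution locates crictl on PATH (substituting the literal word crictl if which finds nothing), and if that whole command fails the step falls back to docker ps. Rewritten here with $(...) purely for readability; the behavior is identical to the logged backtick form:

	    # Prefer crictl when available, otherwise fall back to the docker CLI.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a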
	I1213 09:41:20.153416  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:20.163621  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:20.163709  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:20.189856  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:20.189879  201245 cri.go:89] found id: ""
	I1213 09:41:20.189888  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:20.189961  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:20.193851  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:20.193940  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:20.221660  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:20.221683  201245 cri.go:89] found id: ""
	I1213 09:41:20.221692  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:20.221767  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:20.225599  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:20.225712  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:20.256563  201245 cri.go:89] found id: ""
	I1213 09:41:20.256585  201245 logs.go:282] 0 containers: []
	W1213 09:41:20.256595  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:20.256600  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:20.256707  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:20.283983  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:20.284005  201245 cri.go:89] found id: ""
	I1213 09:41:20.284013  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:20.284088  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:20.287915  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:20.287985  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:20.315291  201245 cri.go:89] found id: ""
	I1213 09:41:20.315324  201245 logs.go:282] 0 containers: []
	W1213 09:41:20.315333  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:20.315339  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:20.315405  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:20.344497  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:20.344520  201245 cri.go:89] found id: ""
	I1213 09:41:20.344528  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:20.344604  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:20.348511  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:20.348628  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:20.380787  201245 cri.go:89] found id: ""
	I1213 09:41:20.380812  201245 logs.go:282] 0 containers: []
	W1213 09:41:20.380821  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:20.380827  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:20.380934  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:20.405025  201245 cri.go:89] found id: ""
	I1213 09:41:20.405051  201245 logs.go:282] 0 containers: []
	W1213 09:41:20.405060  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:20.405086  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:20.405104  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:20.476362  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:20.476385  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:20.476398  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:20.529758  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:20.529794  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:20.571298  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:20.571331  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:20.605926  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:20.606112  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:20.638394  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:20.638428  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:20.701459  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:20.701495  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:20.718602  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:20.718631  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:20.759381  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:20.759412  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
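	By this point the pattern is stable across iterations: kube-apiserver, etcd, kube-scheduler and kube-controller-manager containers exist, while coredns, kube-proxy, kindnet and storage-provisioner never appear. That mix is what you would expect while the apiserver endpoint stays unreachable, since the missing components are only started once the control plane is serving. A quick sweep over the same names (our convenience loop, not a logged command):

	    # Empty output for a name matches the "0 containers" lines above.
	    for c in coredns kube-proxy kindnet storage-provisioner; do
	        echo "== ${c} =="
	        sudo crictl ps -a --quiet --name="${c}"
	    done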
	I1213 09:41:23.325224  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:23.335302  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:23.335371  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:23.361255  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:23.361278  201245 cri.go:89] found id: ""
	I1213 09:41:23.361287  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:23.361347  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:23.366196  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:23.366270  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:23.390715  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:23.390777  201245 cri.go:89] found id: ""
	I1213 09:41:23.390788  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:23.390851  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:23.394860  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:23.394930  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:23.419808  201245 cri.go:89] found id: ""
	I1213 09:41:23.419832  201245 logs.go:282] 0 containers: []
	W1213 09:41:23.419841  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:23.419847  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:23.419904  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:23.447932  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:23.447952  201245 cri.go:89] found id: ""
	I1213 09:41:23.447960  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:23.448019  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:23.451721  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:23.451794  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:23.476550  201245 cri.go:89] found id: ""
	I1213 09:41:23.476575  201245 logs.go:282] 0 containers: []
	W1213 09:41:23.476584  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:23.476590  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:23.476648  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:23.505920  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:23.505947  201245 cri.go:89] found id: ""
	I1213 09:41:23.505956  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:23.506014  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:23.509974  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:23.510053  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:23.533924  201245 cri.go:89] found id: ""
	I1213 09:41:23.533950  201245 logs.go:282] 0 containers: []
	W1213 09:41:23.533959  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:23.533967  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:23.534081  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:23.569660  201245 cri.go:89] found id: ""
	I1213 09:41:23.569687  201245 logs.go:282] 0 containers: []
	W1213 09:41:23.569696  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:23.569710  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:23.569722  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:23.637838  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:23.637856  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:23.637868  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:23.673258  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:23.673290  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:23.709965  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:23.710005  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:23.744232  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:23.744265  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:23.784652  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:23.784734  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:23.901208  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:23.901246  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:23.929660  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:23.929694  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:23.997772  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:23.997808  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
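	Comparing the pgrep timestamps shows the whole probe-and-gather cycle repeating roughly every three seconds. A standalone equivalent of just the wait, sketched under the assumption that only the process appearing matters (the real loop also re-inventories containers and re-collects logs on every pass):

	    # Poll until a kube-apiserver process shows up, about every 3 seconds.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3
	    done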
	I1213 09:41:26.558898  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:26.569008  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:26.569081  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:26.594041  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:26.594073  201245 cri.go:89] found id: ""
	I1213 09:41:26.594083  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:26.594148  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:26.598026  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:26.598093  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:26.623380  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:26.623404  201245 cri.go:89] found id: ""
	I1213 09:41:26.623411  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:26.623467  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:26.627366  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:26.627442  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:26.652852  201245 cri.go:89] found id: ""
	I1213 09:41:26.652877  201245 logs.go:282] 0 containers: []
	W1213 09:41:26.652886  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:26.652892  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:26.652951  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:26.677104  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:26.677126  201245 cri.go:89] found id: ""
	I1213 09:41:26.677134  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:26.677198  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:26.680999  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:26.681066  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:26.706033  201245 cri.go:89] found id: ""
	I1213 09:41:26.706117  201245 logs.go:282] 0 containers: []
	W1213 09:41:26.706141  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:26.706161  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:26.706260  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:26.732515  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:26.732535  201245 cri.go:89] found id: ""
	I1213 09:41:26.732543  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:26.732602  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:26.736526  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:26.736597  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:26.760752  201245 cri.go:89] found id: ""
	I1213 09:41:26.760776  201245 logs.go:282] 0 containers: []
	W1213 09:41:26.760786  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:26.760792  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:26.760850  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:26.792839  201245 cri.go:89] found id: ""
	I1213 09:41:26.792864  201245 logs.go:282] 0 containers: []
	W1213 09:41:26.792874  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:26.792887  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:26.792898  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:26.860718  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:26.860756  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:26.891444  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:26.891475  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:26.921237  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:26.921269  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:26.934629  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:26.934657  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:26.999090  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:26.999126  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:26.999141  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:27.050359  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:27.050395  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:27.083671  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:27.083704  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:27.118090  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:27.118122  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:29.648398  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:29.659588  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:29.659663  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:29.692583  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:29.692609  201245 cri.go:89] found id: ""
	I1213 09:41:29.692617  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:29.692687  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:29.696511  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:29.696579  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:29.728971  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:29.728994  201245 cri.go:89] found id: ""
	I1213 09:41:29.729004  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:29.729093  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:29.733305  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:29.733381  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:29.766224  201245 cri.go:89] found id: ""
	I1213 09:41:29.766251  201245 logs.go:282] 0 containers: []
	W1213 09:41:29.766261  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:29.766267  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:29.766326  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:29.869556  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:29.869582  201245 cri.go:89] found id: ""
	I1213 09:41:29.869591  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:29.869647  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:29.874070  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:29.874145  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:29.906852  201245 cri.go:89] found id: ""
	I1213 09:41:29.906880  201245 logs.go:282] 0 containers: []
	W1213 09:41:29.906891  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:29.906898  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:29.907017  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:29.939317  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:29.939342  201245 cri.go:89] found id: ""
	I1213 09:41:29.939350  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:29.939434  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:29.944065  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:29.944168  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:29.974000  201245 cri.go:89] found id: ""
	I1213 09:41:29.974027  201245 logs.go:282] 0 containers: []
	W1213 09:41:29.974035  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:29.974092  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:29.974178  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:30.007774  201245 cri.go:89] found id: ""
	I1213 09:41:30.007806  201245 logs.go:282] 0 containers: []
	W1213 09:41:30.007816  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:30.007854  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:30.007876  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:30.086400  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:30.086486  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:30.162510  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:30.162601  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:30.179368  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:30.179394  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:30.295120  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:30.295140  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:30.295154  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:30.335480  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:30.336527  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:30.382971  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:30.383094  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:30.431020  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:30.431099  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:30.464311  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:30.464383  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:33.012899  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:33.029509  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:33.029698  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:33.078272  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:33.078291  201245 cri.go:89] found id: ""
	I1213 09:41:33.078298  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:33.078377  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:33.083491  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:33.083601  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:33.118233  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:33.118305  201245 cri.go:89] found id: ""
	I1213 09:41:33.118334  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:33.118427  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:33.123522  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:33.123592  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:33.160095  201245 cri.go:89] found id: ""
	I1213 09:41:33.160117  201245 logs.go:282] 0 containers: []
	W1213 09:41:33.160131  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:33.160138  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:33.160197  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:33.187567  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:33.187586  201245 cri.go:89] found id: ""
	I1213 09:41:33.187594  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:33.187648  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:33.191766  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:33.191832  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:33.237859  201245 cri.go:89] found id: ""
	I1213 09:41:33.237934  201245 logs.go:282] 0 containers: []
	W1213 09:41:33.237959  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:33.237977  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:33.238100  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:33.277913  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:33.277984  201245 cri.go:89] found id: ""
	I1213 09:41:33.278016  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:33.278114  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:33.282450  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:33.282591  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:33.310449  201245 cri.go:89] found id: ""
	I1213 09:41:33.310517  201245 logs.go:282] 0 containers: []
	W1213 09:41:33.310553  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:33.310579  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:33.310668  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:33.342130  201245 cri.go:89] found id: ""
	I1213 09:41:33.342206  201245 logs.go:282] 0 containers: []
	W1213 09:41:33.342230  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:33.342276  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:33.342311  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:33.385086  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:33.385118  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:33.418417  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:33.418450  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:33.453751  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:33.453782  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:33.515926  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:33.515960  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:33.544097  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:33.544127  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:33.646380  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:33.646451  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:33.646477  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:33.692746  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:33.692816  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:33.745444  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:33.745517  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:36.277710  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:36.289010  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:36.289079  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:36.317185  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:36.317204  201245 cri.go:89] found id: ""
	I1213 09:41:36.317212  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:36.317271  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:36.321716  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:36.321783  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:36.349597  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:36.349617  201245 cri.go:89] found id: ""
	I1213 09:41:36.349625  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:36.349682  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:36.354045  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:36.354118  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:36.380882  201245 cri.go:89] found id: ""
	I1213 09:41:36.380903  201245 logs.go:282] 0 containers: []
	W1213 09:41:36.380912  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:36.380918  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:36.380974  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:36.409753  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:36.409771  201245 cri.go:89] found id: ""
	I1213 09:41:36.409779  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:36.409836  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:36.414030  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:36.414323  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:36.454624  201245 cri.go:89] found id: ""
	I1213 09:41:36.454699  201245 logs.go:282] 0 containers: []
	W1213 09:41:36.454722  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:36.454740  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:36.454831  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:36.489803  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:36.489875  201245 cri.go:89] found id: ""
	I1213 09:41:36.489897  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:36.489985  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:36.494150  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:36.494287  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:36.536464  201245 cri.go:89] found id: ""
	I1213 09:41:36.536539  201245 logs.go:282] 0 containers: []
	W1213 09:41:36.536563  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:36.536581  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:36.536666  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:36.571415  201245 cri.go:89] found id: ""
	I1213 09:41:36.571491  201245 logs.go:282] 0 containers: []
	W1213 09:41:36.571533  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:36.571568  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:36.571595  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:36.660486  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:36.660570  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:36.679083  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:36.679124  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:36.730826  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:36.731003  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:36.764828  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:36.764910  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:36.840174  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:36.840207  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:36.881587  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:36.881627  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:36.925216  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:36.925244  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:37.005075  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:37.005101  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:37.005116  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:39.559612  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:39.576156  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:39.576225  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:39.618322  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:39.618347  201245 cri.go:89] found id: ""
	I1213 09:41:39.618359  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:39.618423  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:39.622728  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:39.622810  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:39.651317  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:39.651342  201245 cri.go:89] found id: ""
	I1213 09:41:39.651359  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:39.651416  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:39.655867  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:39.655942  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:39.685502  201245 cri.go:89] found id: ""
	I1213 09:41:39.685538  201245 logs.go:282] 0 containers: []
	W1213 09:41:39.685547  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:39.685554  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:39.685619  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:39.722265  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:39.722289  201245 cri.go:89] found id: ""
	I1213 09:41:39.722299  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:39.722357  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:39.726803  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:39.726886  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:39.765041  201245 cri.go:89] found id: ""
	I1213 09:41:39.765073  201245 logs.go:282] 0 containers: []
	W1213 09:41:39.765082  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:39.765089  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:39.765150  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:39.815102  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:39.815131  201245 cri.go:89] found id: ""
	I1213 09:41:39.815139  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:39.815200  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:39.848110  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:39.848183  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:39.893438  201245 cri.go:89] found id: ""
	I1213 09:41:39.893468  201245 logs.go:282] 0 containers: []
	W1213 09:41:39.893477  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:39.893491  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:39.893566  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:39.930334  201245 cri.go:89] found id: ""
	I1213 09:41:39.930360  201245 logs.go:282] 0 containers: []
	W1213 09:41:39.930369  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:39.930397  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:39.930416  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:39.988788  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:39.988827  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:40.062582  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:40.062615  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:40.076428  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:40.076454  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:40.111285  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:40.111317  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:40.146130  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:40.146163  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:40.184659  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:40.184725  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:40.222307  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:40.222380  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:40.288908  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:40.288955  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:40.377981  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:42.879676  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:42.891020  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:42.891108  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:42.930035  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:42.930055  201245 cri.go:89] found id: ""
	I1213 09:41:42.930064  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:42.930132  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:42.934326  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:42.934411  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:42.965605  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:42.965635  201245 cri.go:89] found id: ""
	I1213 09:41:42.965657  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:42.965724  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:42.975982  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:42.976092  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:43.021921  201245 cri.go:89] found id: ""
	I1213 09:41:43.021962  201245 logs.go:282] 0 containers: []
	W1213 09:41:43.021972  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:43.021989  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:43.022075  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:43.056832  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:43.056858  201245 cri.go:89] found id: ""
	I1213 09:41:43.056874  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:43.056929  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:43.061136  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:43.061216  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:43.097255  201245 cri.go:89] found id: ""
	I1213 09:41:43.097293  201245 logs.go:282] 0 containers: []
	W1213 09:41:43.097303  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:43.097309  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:43.097376  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:43.133389  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:43.133423  201245 cri.go:89] found id: ""
	I1213 09:41:43.133432  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:43.133496  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:43.137778  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:43.137858  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:43.167977  201245 cri.go:89] found id: ""
	I1213 09:41:43.168001  201245 logs.go:282] 0 containers: []
	W1213 09:41:43.168010  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:43.168032  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:43.168092  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:43.194124  201245 cri.go:89] found id: ""
	I1213 09:41:43.194162  201245 logs.go:282] 0 containers: []
	W1213 09:41:43.194172  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:43.194185  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:43.194201  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:43.236770  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:43.236803  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:43.278266  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:43.278300  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:43.320632  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:43.320661  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:43.396440  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:43.396596  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:43.514699  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:43.514721  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:43.514734  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:43.598675  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:43.600023  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:43.700200  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:43.700270  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:43.758576  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:43.758756  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:46.277566  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:46.292122  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:46.292191  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:46.346542  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:46.346561  201245 cri.go:89] found id: ""
	I1213 09:41:46.346569  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:46.346625  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:46.352329  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:46.352399  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:46.391624  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:46.391643  201245 cri.go:89] found id: ""
	I1213 09:41:46.391651  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:46.391711  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:46.400052  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:46.400177  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:46.452208  201245 cri.go:89] found id: ""
	I1213 09:41:46.452229  201245 logs.go:282] 0 containers: []
	W1213 09:41:46.452237  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:46.452243  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:46.452302  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:46.491255  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:46.491274  201245 cri.go:89] found id: ""
	I1213 09:41:46.491282  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:46.491336  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:46.495360  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:46.495430  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:46.551527  201245 cri.go:89] found id: ""
	I1213 09:41:46.551548  201245 logs.go:282] 0 containers: []
	W1213 09:41:46.551557  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:46.551563  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:46.551621  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:46.600271  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:46.600289  201245 cri.go:89] found id: ""
	I1213 09:41:46.600297  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:46.600350  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:46.605705  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:46.605829  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:46.649321  201245 cri.go:89] found id: ""
	I1213 09:41:46.649342  201245 logs.go:282] 0 containers: []
	W1213 09:41:46.649350  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:46.649356  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:46.649413  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:46.712093  201245 cri.go:89] found id: ""
	I1213 09:41:46.712114  201245 logs.go:282] 0 containers: []
	W1213 09:41:46.712122  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:46.712135  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:46.712146  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:46.759936  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:46.759966  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:46.847390  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:46.847748  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:46.882942  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:46.882966  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:46.947542  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:46.947633  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:46.998841  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:46.998914  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:47.047205  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:47.047380  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:47.093665  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:47.093696  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:47.202688  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:47.202710  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:47.202725  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:49.753958  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:49.764587  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:49.764663  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:49.801613  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:49.801636  201245 cri.go:89] found id: ""
	I1213 09:41:49.801645  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:49.801698  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:49.807026  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:49.807104  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:49.845020  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:49.845040  201245 cri.go:89] found id: ""
	I1213 09:41:49.845048  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:49.845104  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:49.849220  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:49.849289  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:49.881947  201245 cri.go:89] found id: ""
	I1213 09:41:49.881968  201245 logs.go:282] 0 containers: []
	W1213 09:41:49.881977  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:49.881983  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:49.882041  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:49.915336  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:49.915412  201245 cri.go:89] found id: ""
	I1213 09:41:49.915433  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:49.915581  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:49.919316  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:49.919382  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:49.951140  201245 cri.go:89] found id: ""
	I1213 09:41:49.951163  201245 logs.go:282] 0 containers: []
	W1213 09:41:49.951172  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:49.951178  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:49.951236  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:49.989538  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:49.989563  201245 cri.go:89] found id: ""
	I1213 09:41:49.989572  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:49.989644  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:49.995289  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:49.995363  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:50.056994  201245 cri.go:89] found id: ""
	I1213 09:41:50.057020  201245 logs.go:282] 0 containers: []
	W1213 09:41:50.057029  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:50.057036  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:50.057099  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:50.122154  201245 cri.go:89] found id: ""
	I1213 09:41:50.122179  201245 logs.go:282] 0 containers: []
	W1213 09:41:50.122188  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:50.122203  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:50.122220  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:50.201513  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:50.201551  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:50.274056  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:50.274080  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:50.274092  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:50.332252  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:50.332285  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:50.350004  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:50.350035  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:50.414145  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:50.414178  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:50.456623  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:50.456657  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:50.514065  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:50.514274  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:50.552971  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:50.553048  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:53.088969  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:53.099403  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:53.099476  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:53.126150  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:53.126178  201245 cri.go:89] found id: ""
	I1213 09:41:53.126187  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:53.126240  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:53.130775  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:53.130855  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:53.157051  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:53.157077  201245 cri.go:89] found id: ""
	I1213 09:41:53.157095  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:53.157154  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:53.160945  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:53.161015  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:53.196500  201245 cri.go:89] found id: ""
	I1213 09:41:53.196524  201245 logs.go:282] 0 containers: []
	W1213 09:41:53.196533  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:53.196538  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:53.196598  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:53.226058  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:53.226081  201245 cri.go:89] found id: ""
	I1213 09:41:53.226088  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:53.226144  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:53.230403  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:53.230569  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:53.257896  201245 cri.go:89] found id: ""
	I1213 09:41:53.257919  201245 logs.go:282] 0 containers: []
	W1213 09:41:53.257927  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:53.257933  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:53.257991  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:53.285301  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:53.285324  201245 cri.go:89] found id: ""
	I1213 09:41:53.285332  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:53.285387  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:53.289148  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:53.289246  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:53.315242  201245 cri.go:89] found id: ""
	I1213 09:41:53.315271  201245 logs.go:282] 0 containers: []
	W1213 09:41:53.315281  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:53.315288  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:53.315347  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:53.343883  201245 cri.go:89] found id: ""
	I1213 09:41:53.343907  201245 logs.go:282] 0 containers: []
	W1213 09:41:53.343915  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:53.343946  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:53.343961  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:53.404553  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:53.404589  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:53.419417  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:53.419448  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:53.461002  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:53.461040  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:53.492498  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:53.492529  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:53.572971  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:53.572992  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:53.573005  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:53.623636  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:53.623678  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:53.662748  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:53.662782  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:53.693769  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:53.693802  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:56.228800  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:56.241055  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:56.241126  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:56.296372  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:56.296397  201245 cri.go:89] found id: ""
	I1213 09:41:56.296405  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:56.296465  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:56.300916  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:56.300987  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:56.344555  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:56.344578  201245 cri.go:89] found id: ""
	I1213 09:41:56.344586  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:56.344638  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:56.348662  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:56.348737  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:56.393860  201245 cri.go:89] found id: ""
	I1213 09:41:56.393883  201245 logs.go:282] 0 containers: []
	W1213 09:41:56.393892  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:56.393898  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:56.393953  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:56.448734  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:56.448753  201245 cri.go:89] found id: ""
	I1213 09:41:56.448768  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:56.448821  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:56.464051  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:56.464127  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:56.532990  201245 cri.go:89] found id: ""
	I1213 09:41:56.533013  201245 logs.go:282] 0 containers: []
	W1213 09:41:56.533021  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:56.533027  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:56.533083  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:56.593640  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:56.593665  201245 cri.go:89] found id: ""
	I1213 09:41:56.593674  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:56.593727  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:56.598254  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:56.598334  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:56.644581  201245 cri.go:89] found id: ""
	I1213 09:41:56.644610  201245 logs.go:282] 0 containers: []
	W1213 09:41:56.644619  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:56.644626  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:56.644686  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:56.688948  201245 cri.go:89] found id: ""
	I1213 09:41:56.688970  201245 logs.go:282] 0 containers: []
	W1213 09:41:56.688978  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:56.688992  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:56.689004  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:41:56.827858  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:41:56.827875  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:41:56.827892  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:56.874892  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:41:56.874966  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:56.925505  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:41:56.925579  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:56.996701  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:41:56.996774  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:41:57.041891  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:41:57.041962  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:41:57.118609  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:41:57.118699  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:41:57.136869  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:41:57.136946  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:57.187973  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:41:57.188046  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:41:59.722083  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:41:59.732409  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:41:59.732537  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:41:59.758196  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:41:59.758218  201245 cri.go:89] found id: ""
	I1213 09:41:59.758235  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:41:59.758292  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:59.762196  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:41:59.762273  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:41:59.795301  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:41:59.795325  201245 cri.go:89] found id: ""
	I1213 09:41:59.795334  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:41:59.795389  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:59.799342  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:41:59.799415  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:41:59.833735  201245 cri.go:89] found id: ""
	I1213 09:41:59.833762  201245 logs.go:282] 0 containers: []
	W1213 09:41:59.833771  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:41:59.833777  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:41:59.833841  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:41:59.861733  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:41:59.861753  201245 cri.go:89] found id: ""
	I1213 09:41:59.861761  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:41:59.861818  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:59.865723  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:41:59.865799  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:41:59.890737  201245 cri.go:89] found id: ""
	I1213 09:41:59.890762  201245 logs.go:282] 0 containers: []
	W1213 09:41:59.890770  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:41:59.890777  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:41:59.890840  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:41:59.914836  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:41:59.914858  201245 cri.go:89] found id: ""
	I1213 09:41:59.914866  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:41:59.914920  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:41:59.918639  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:41:59.918740  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:41:59.943433  201245 cri.go:89] found id: ""
	I1213 09:41:59.943457  201245 logs.go:282] 0 containers: []
	W1213 09:41:59.943471  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:41:59.943478  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:41:59.943573  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:41:59.972253  201245 cri.go:89] found id: ""
	I1213 09:41:59.972319  201245 logs.go:282] 0 containers: []
	W1213 09:41:59.972334  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:41:59.972348  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:41:59.972361  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:00.215374  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:00.215407  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:00.215434  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:00.338415  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:00.338459  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:00.379747  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:00.379784  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:00.432599  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:00.432635  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:00.468382  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:00.468415  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:00.504339  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:00.504370  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:00.564966  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:00.565002  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:00.579810  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:00.579840  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
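	The block above is one full iteration of a wait loop that repeats for the rest of this excerpt: roughly every three seconds minikube probes for a kube-apiserver process with pgrep, enumerates each control-plane component's containers with crictl, and re-gathers logs. A minimal Go sketch of that polling pattern follows, assuming sudo, pgrep, and crictl are on PATH; the helper names are hypothetical, not minikube's code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// listContainers mirrors the "sudo crictl ps -a --quiet --name=<name>"
	// calls in the log: it returns all container IDs whose name matches.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		deadline := time.Now().Add(6 * time.Minute) // the test above polls for minutes before giving up
		for time.Now().Before(deadline) {
			// First probe for a running apiserver process, as the log does.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
				fmt.Println("no kube-apiserver process yet:", err)
			}
			for _, name := range components {
				ids, err := listContainers(name)
				if err != nil || len(ids) == 0 {
					fmt.Printf("no container was found matching %q\n", name)
					continue
				}
				fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
			}
			time.Sleep(3 * time.Second) // cycles in the log are roughly 3s apart
		}
	}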
	I1213 09:42:03.114825  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:03.125706  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:03.125784  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:03.151475  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:03.151497  201245 cri.go:89] found id: ""
	I1213 09:42:03.151507  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:03.151613  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:03.155313  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:03.155383  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:03.181735  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:03.181759  201245 cri.go:89] found id: ""
	I1213 09:42:03.181768  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:03.181831  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:03.185698  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:03.185772  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:03.212294  201245 cri.go:89] found id: ""
	I1213 09:42:03.212373  201245 logs.go:282] 0 containers: []
	W1213 09:42:03.212397  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:03.212411  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:03.212476  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:03.238214  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:03.238237  201245 cri.go:89] found id: ""
	I1213 09:42:03.238246  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:03.238302  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:03.242573  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:03.242652  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:03.286675  201245 cri.go:89] found id: ""
	I1213 09:42:03.286703  201245 logs.go:282] 0 containers: []
	W1213 09:42:03.286713  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:03.286720  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:03.286782  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:03.321700  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:03.321723  201245 cri.go:89] found id: ""
	I1213 09:42:03.321732  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:03.321790  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:03.326692  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:03.326773  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:03.353697  201245 cri.go:89] found id: ""
	I1213 09:42:03.353727  201245 logs.go:282] 0 containers: []
	W1213 09:42:03.353736  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:03.353742  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:03.353801  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:03.381265  201245 cri.go:89] found id: ""
	I1213 09:42:03.381292  201245 logs.go:282] 0 containers: []
	W1213 09:42:03.381302  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:03.381315  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:03.381329  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:03.415058  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:03.415093  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:03.444141  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:03.444175  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:03.515076  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:03.515118  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:03.515132  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:03.548006  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:03.548036  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:03.581064  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:03.581093  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:03.609814  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:03.609885  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:03.670731  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:03.670777  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:03.686456  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:03.686486  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
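	Every "describe nodes" attempt in this loop fails the same way: kubectl cannot reach localhost:8443, which means the kube-apiserver container exists but nothing is yet accepting TCP connections on the secure port. A stand-alone Go probe for that condition is sketched below; the address comes from the error text above, and this is an illustration, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same condition kubectl reports as "connection ... refused":
		// no listener on the apiserver's secure port.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}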
	I1213 09:42:06.220637  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:06.230788  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:06.230880  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:06.262115  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:06.262134  201245 cri.go:89] found id: ""
	I1213 09:42:06.262142  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:06.262202  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:06.268870  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:06.268946  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:06.300835  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:06.300856  201245 cri.go:89] found id: ""
	I1213 09:42:06.300865  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:06.300921  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:06.305757  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:06.305828  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:06.336018  201245 cri.go:89] found id: ""
	I1213 09:42:06.336042  201245 logs.go:282] 0 containers: []
	W1213 09:42:06.336051  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:06.336058  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:06.336117  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:06.376907  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:06.376929  201245 cri.go:89] found id: ""
	I1213 09:42:06.376937  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:06.377004  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:06.381025  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:06.381095  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:06.407782  201245 cri.go:89] found id: ""
	I1213 09:42:06.407818  201245 logs.go:282] 0 containers: []
	W1213 09:42:06.407828  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:06.407852  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:06.407939  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:06.435325  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:06.435395  201245 cri.go:89] found id: ""
	I1213 09:42:06.435417  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:06.435489  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:06.439486  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:06.439585  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:06.466838  201245 cri.go:89] found id: ""
	I1213 09:42:06.466861  201245 logs.go:282] 0 containers: []
	W1213 09:42:06.466871  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:06.466880  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:06.466938  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:06.495235  201245 cri.go:89] found id: ""
	I1213 09:42:06.495259  201245 logs.go:282] 0 containers: []
	W1213 09:42:06.495268  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:06.495285  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:06.495296  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:06.554770  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:06.554804  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:06.569285  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:06.569314  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:06.635234  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:06.635257  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:06.635270  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:06.684887  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:06.684915  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:06.718838  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:06.718872  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:06.754963  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:06.754994  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:06.784938  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:06.784968  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:06.815909  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:06.815937  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:09.348955  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:09.359099  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:09.359173  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:09.385544  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:09.385579  201245 cri.go:89] found id: ""
	I1213 09:42:09.385588  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:09.385647  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:09.390633  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:09.390710  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:09.417461  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:09.417482  201245 cri.go:89] found id: ""
	I1213 09:42:09.417491  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:09.417562  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:09.421460  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:09.421542  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:09.449550  201245 cri.go:89] found id: ""
	I1213 09:42:09.449573  201245 logs.go:282] 0 containers: []
	W1213 09:42:09.449582  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:09.449588  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:09.449646  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:09.474426  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:09.474448  201245 cri.go:89] found id: ""
	I1213 09:42:09.474458  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:09.474515  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:09.478361  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:09.478430  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:09.502068  201245 cri.go:89] found id: ""
	I1213 09:42:09.502092  201245 logs.go:282] 0 containers: []
	W1213 09:42:09.502101  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:09.502106  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:09.502170  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:09.526901  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:09.526924  201245 cri.go:89] found id: ""
	I1213 09:42:09.526933  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:09.526992  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:09.530673  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:09.530744  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:09.556224  201245 cri.go:89] found id: ""
	I1213 09:42:09.556313  201245 logs.go:282] 0 containers: []
	W1213 09:42:09.556329  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:09.556336  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:09.556394  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:09.581789  201245 cri.go:89] found id: ""
	I1213 09:42:09.581812  201245 logs.go:282] 0 containers: []
	W1213 09:42:09.581821  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:09.581839  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:09.581850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:09.610201  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:09.610234  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:09.670781  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:09.670814  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:09.690964  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:09.690991  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:09.757753  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:09.757773  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:09.757786  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:09.805819  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:09.805849  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:09.840485  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:09.840519  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:09.873767  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:09.873797  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:09.903260  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:09.903287  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
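	Each cycle ends by re-collecting the same evidence: journalctl output for the kubelet and containerd units, "crictl logs --tail 400" for each component container that was found, a dmesg excerpt, and a container-status listing with a crictl-or-docker fallback. A compact sketch of that gather step, reusing the exact shell commands from the log (the gather wrapper itself is hypothetical, mirroring the ssh_runner lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one of the collection commands seen in the log via
	// "/bin/bash -c" and prints whatever it produced.
	func gather(desc, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", desc, err)
		}
		fmt.Printf("=== %s ===\n%s", desc, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Per-component logs reuse the container IDs found earlier, e.g.:
		// gather("kube-apiserver", "sudo /usr/local/bin/crictl logs --tail 400 <container-id>")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}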
	I1213 09:42:12.441795  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:12.451982  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:12.452053  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:12.476235  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:12.476257  201245 cri.go:89] found id: ""
	I1213 09:42:12.476266  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:12.476319  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:12.480879  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:12.480945  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:12.507708  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:12.507730  201245 cri.go:89] found id: ""
	I1213 09:42:12.507739  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:12.507796  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:12.511605  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:12.511676  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:12.536603  201245 cri.go:89] found id: ""
	I1213 09:42:12.536626  201245 logs.go:282] 0 containers: []
	W1213 09:42:12.536635  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:12.536653  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:12.536713  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:12.562756  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:12.562818  201245 cri.go:89] found id: ""
	I1213 09:42:12.562842  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:12.562920  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:12.566635  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:12.566735  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:12.595120  201245 cri.go:89] found id: ""
	I1213 09:42:12.595194  201245 logs.go:282] 0 containers: []
	W1213 09:42:12.595220  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:12.595249  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:12.595328  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:12.625469  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:12.625529  201245 cri.go:89] found id: ""
	I1213 09:42:12.625551  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:12.625626  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:12.629510  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:12.629581  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:12.665118  201245 cri.go:89] found id: ""
	I1213 09:42:12.665147  201245 logs.go:282] 0 containers: []
	W1213 09:42:12.665156  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:12.665179  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:12.665253  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:12.693870  201245 cri.go:89] found id: ""
	I1213 09:42:12.693895  201245 logs.go:282] 0 containers: []
	W1213 09:42:12.693905  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:12.693919  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:12.693934  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:12.708088  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:12.708117  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:12.776905  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:12.776934  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:12.776948  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:12.810757  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:12.810784  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:12.838876  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:12.838903  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:12.872917  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:12.872950  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:12.908958  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:12.908990  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:12.952436  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:12.952465  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:12.984500  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:12.984534  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:15.554056  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:15.565006  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:15.565079  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:15.591730  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:15.591752  201245 cri.go:89] found id: ""
	I1213 09:42:15.591761  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:15.591817  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:15.595507  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:15.595615  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:15.625596  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:15.625618  201245 cri.go:89] found id: ""
	I1213 09:42:15.625626  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:15.625682  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:15.629434  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:15.629509  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:15.659641  201245 cri.go:89] found id: ""
	I1213 09:42:15.659663  201245 logs.go:282] 0 containers: []
	W1213 09:42:15.659672  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:15.659678  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:15.659739  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:15.695200  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:15.695222  201245 cri.go:89] found id: ""
	I1213 09:42:15.695231  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:15.695298  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:15.699092  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:15.699168  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:15.730464  201245 cri.go:89] found id: ""
	I1213 09:42:15.730495  201245 logs.go:282] 0 containers: []
	W1213 09:42:15.730504  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:15.730511  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:15.730573  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:15.756959  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:15.756987  201245 cri.go:89] found id: ""
	I1213 09:42:15.756996  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:15.757061  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:15.760834  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:15.760944  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:15.786430  201245 cri.go:89] found id: ""
	I1213 09:42:15.786451  201245 logs.go:282] 0 containers: []
	W1213 09:42:15.786460  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:15.786467  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:15.786525  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:15.820672  201245 cri.go:89] found id: ""
	I1213 09:42:15.820694  201245 logs.go:282] 0 containers: []
	W1213 09:42:15.820726  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:15.820748  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:15.820764  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:15.849253  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:15.849281  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:15.862840  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:15.862869  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:15.894960  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:15.894997  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:15.925766  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:15.925800  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:15.962117  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:15.962159  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:16.030917  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:16.031002  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:16.116181  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:16.116201  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:16.116217  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:16.159316  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:16.159351  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:18.694159  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:18.704415  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:18.704483  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:18.735439  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:18.735462  201245 cri.go:89] found id: ""
	I1213 09:42:18.735471  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:18.735560  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:18.739216  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:18.739289  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:18.764742  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:18.764763  201245 cri.go:89] found id: ""
	I1213 09:42:18.764771  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:18.764824  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:18.768543  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:18.768609  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:18.793136  201245 cri.go:89] found id: ""
	I1213 09:42:18.793161  201245 logs.go:282] 0 containers: []
	W1213 09:42:18.793169  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:18.793176  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:18.793252  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:18.823006  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:18.823102  201245 cri.go:89] found id: ""
	I1213 09:42:18.823126  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:18.823214  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:18.827803  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:18.827925  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:18.857646  201245 cri.go:89] found id: ""
	I1213 09:42:18.857670  201245 logs.go:282] 0 containers: []
	W1213 09:42:18.857678  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:18.857684  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:18.857740  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:18.884679  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:18.884704  201245 cri.go:89] found id: ""
	I1213 09:42:18.884713  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:18.884768  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:18.888440  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:18.888525  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:18.913208  201245 cri.go:89] found id: ""
	I1213 09:42:18.913285  201245 logs.go:282] 0 containers: []
	W1213 09:42:18.913300  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:18.913308  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:18.913368  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:18.949553  201245 cri.go:89] found id: ""
	I1213 09:42:18.949621  201245 logs.go:282] 0 containers: []
	W1213 09:42:18.949635  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:18.949654  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:18.949665  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:19.019599  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:19.019626  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:19.019640  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:19.064877  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:19.064907  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:19.132365  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:19.132464  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:19.152285  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:19.152316  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:19.188973  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:19.189012  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:19.222791  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:19.222820  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:19.254690  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:19.254722  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:19.288255  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:19.288288  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:21.832699  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:21.844166  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:21.844243  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:21.871640  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:21.871659  201245 cri.go:89] found id: ""
	I1213 09:42:21.871667  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:21.871720  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:21.875494  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:21.875653  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:21.900903  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:21.900922  201245 cri.go:89] found id: ""
	I1213 09:42:21.900930  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:21.900988  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:21.905064  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:21.905147  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:21.929844  201245 cri.go:89] found id: ""
	I1213 09:42:21.929869  201245 logs.go:282] 0 containers: []
	W1213 09:42:21.929883  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:21.929890  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:21.929949  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:21.953810  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:21.953833  201245 cri.go:89] found id: ""
	I1213 09:42:21.953841  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:21.953894  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:21.957444  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:21.957515  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:21.984257  201245 cri.go:89] found id: ""
	I1213 09:42:21.984280  201245 logs.go:282] 0 containers: []
	W1213 09:42:21.984297  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:21.984304  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:21.984360  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:22.014168  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:22.014204  201245 cri.go:89] found id: ""
	I1213 09:42:22.014214  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:22.014287  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:22.018518  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:22.018601  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:22.066079  201245 cri.go:89] found id: ""
	I1213 09:42:22.066101  201245 logs.go:282] 0 containers: []
	W1213 09:42:22.066110  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:22.066116  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:22.066184  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:22.098630  201245 cri.go:89] found id: ""
	I1213 09:42:22.098697  201245 logs.go:282] 0 containers: []
	W1213 09:42:22.098708  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:22.098723  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:22.098734  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:22.128114  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:22.128195  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:22.159548  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:22.159580  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:22.202989  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:22.203026  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:22.232374  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:22.232405  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:22.293896  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:22.293931  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:22.307070  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:22.307095  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:22.373233  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:22.373253  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:22.373266  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:22.413519  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:22.413548  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:24.944120  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:24.954270  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:24.954338  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:24.986020  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:24.986043  201245 cri.go:89] found id: ""
	I1213 09:42:24.986051  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:24.986119  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:24.989689  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:24.989760  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:25.017858  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:25.017881  201245 cri.go:89] found id: ""
	I1213 09:42:25.017890  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:25.017949  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:25.025348  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:25.025429  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:25.054044  201245 cri.go:89] found id: ""
	I1213 09:42:25.054070  201245 logs.go:282] 0 containers: []
	W1213 09:42:25.054079  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:25.054086  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:25.054143  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:25.101944  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:25.101964  201245 cri.go:89] found id: ""
	I1213 09:42:25.101973  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:25.102033  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:25.106176  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:25.106272  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:25.133433  201245 cri.go:89] found id: ""
	I1213 09:42:25.133456  201245 logs.go:282] 0 containers: []
	W1213 09:42:25.133465  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:25.133472  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:25.133530  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:25.162762  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:25.162786  201245 cri.go:89] found id: ""
	I1213 09:42:25.162797  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:25.162856  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:25.166676  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:25.166777  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:25.192087  201245 cri.go:89] found id: ""
	I1213 09:42:25.192152  201245 logs.go:282] 0 containers: []
	W1213 09:42:25.192176  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:25.192188  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:25.192261  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:25.215182  201245 cri.go:89] found id: ""
	I1213 09:42:25.215206  201245 logs.go:282] 0 containers: []
	W1213 09:42:25.215215  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:25.215228  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:25.215264  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:25.275112  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:25.275145  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:25.344581  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:25.344602  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:25.344616  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:25.379809  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:25.379839  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:25.417008  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:25.417041  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:25.449123  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:25.449153  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:25.463205  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:25.463238  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:25.514100  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:25.514137  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:25.544819  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:25.544850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
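	For readers following the loop, each gather iteration in this log reduces to the same container-discovery pass over eight component names. A condensed sketch of those calls (component names and crictl flags are verbatim from this log; the loop wrapper is illustrative):
	
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet storage-provisioner; do
	    # empty output means no container matched that name
	    sudo crictl ps -a --quiet --name="$c"
	  done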
	I1213 09:42:28.077069  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:28.087437  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:28.087544  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:28.118256  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:28.118279  201245 cri.go:89] found id: ""
	I1213 09:42:28.118287  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:28.118342  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:28.122534  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:28.122619  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:28.151561  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:28.151586  201245 cri.go:89] found id: ""
	I1213 09:42:28.151594  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:28.151650  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:28.155875  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:28.155948  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:28.180869  201245 cri.go:89] found id: ""
	I1213 09:42:28.180893  201245 logs.go:282] 0 containers: []
	W1213 09:42:28.180901  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:28.180907  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:28.180967  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:28.206648  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:28.206667  201245 cri.go:89] found id: ""
	I1213 09:42:28.206676  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:28.206730  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:28.210485  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:28.210561  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:28.236348  201245 cri.go:89] found id: ""
	I1213 09:42:28.236371  201245 logs.go:282] 0 containers: []
	W1213 09:42:28.236380  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:28.236386  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:28.236445  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:28.263737  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:28.263808  201245 cri.go:89] found id: ""
	I1213 09:42:28.263831  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:28.263906  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:28.268060  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:28.268134  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:28.292483  201245 cri.go:89] found id: ""
	I1213 09:42:28.292556  201245 logs.go:282] 0 containers: []
	W1213 09:42:28.292581  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:28.292600  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:28.292691  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:28.319990  201245 cri.go:89] found id: ""
	I1213 09:42:28.320054  201245 logs.go:282] 0 containers: []
	W1213 09:42:28.320078  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:28.320103  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:28.320142  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:28.357316  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:28.357345  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:28.370389  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:28.370421  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:28.440100  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:28.440121  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:28.440140  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:28.475097  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:28.475127  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:28.511404  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:28.511435  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:28.542299  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:28.542327  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:28.571991  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:28.572025  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:28.632856  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:28.632893  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:31.166453  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:31.176550  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:31.176613  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:31.201198  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:31.201218  201245 cri.go:89] found id: ""
	I1213 09:42:31.201227  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:31.201284  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:31.205391  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:31.205459  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:31.233912  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:31.233931  201245 cri.go:89] found id: ""
	I1213 09:42:31.233939  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:31.234001  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:31.237666  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:31.237747  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:31.271083  201245 cri.go:89] found id: ""
	I1213 09:42:31.271106  201245 logs.go:282] 0 containers: []
	W1213 09:42:31.271115  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:31.271121  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:31.271181  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:31.297696  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:31.297720  201245 cri.go:89] found id: ""
	I1213 09:42:31.297728  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:31.297782  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:31.301435  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:31.301509  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:31.326519  201245 cri.go:89] found id: ""
	I1213 09:42:31.326542  201245 logs.go:282] 0 containers: []
	W1213 09:42:31.326551  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:31.326556  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:31.326619  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:31.350392  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:31.350415  201245 cri.go:89] found id: ""
	I1213 09:42:31.350425  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:31.350483  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:31.354217  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:31.354287  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:31.378427  201245 cri.go:89] found id: ""
	I1213 09:42:31.378451  201245 logs.go:282] 0 containers: []
	W1213 09:42:31.378459  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:31.378465  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:31.378529  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:31.404540  201245 cri.go:89] found id: ""
	I1213 09:42:31.404607  201245 logs.go:282] 0 containers: []
	W1213 09:42:31.404632  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:31.404653  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:31.404665  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:31.418855  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:31.418886  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:31.463038  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:31.463070  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:31.498821  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:31.498850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:31.561764  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:31.561806  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:31.593530  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:31.593564  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:31.629165  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:31.629194  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:31.690524  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:31.690564  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:31.761949  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:31.761971  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:31.761984  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:34.292733  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:34.302888  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:34.302957  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:34.329307  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:34.329338  201245 cri.go:89] found id: ""
	I1213 09:42:34.329347  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:34.329420  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:34.333282  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:34.333359  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:34.357722  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:34.357745  201245 cri.go:89] found id: ""
	I1213 09:42:34.357753  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:34.357812  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:34.361696  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:34.361766  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:34.387467  201245 cri.go:89] found id: ""
	I1213 09:42:34.387500  201245 logs.go:282] 0 containers: []
	W1213 09:42:34.387564  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:34.387573  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:34.387638  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:34.412923  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:34.412944  201245 cri.go:89] found id: ""
	I1213 09:42:34.412953  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:34.413011  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:34.416846  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:34.416919  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:34.443610  201245 cri.go:89] found id: ""
	I1213 09:42:34.443638  201245 logs.go:282] 0 containers: []
	W1213 09:42:34.443647  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:34.443681  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:34.443765  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:34.469270  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:34.469292  201245 cri.go:89] found id: ""
	I1213 09:42:34.469300  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:34.469356  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:34.473329  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:34.473426  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:34.499421  201245 cri.go:89] found id: ""
	I1213 09:42:34.499446  201245 logs.go:282] 0 containers: []
	W1213 09:42:34.499464  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:34.499470  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:34.499553  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:34.524654  201245 cri.go:89] found id: ""
	I1213 09:42:34.524729  201245 logs.go:282] 0 containers: []
	W1213 09:42:34.524752  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:34.524794  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:34.524822  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:34.553542  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:34.553578  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:34.612340  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:34.612377  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:34.645412  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:34.645445  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:34.686741  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:34.686771  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:34.726502  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:34.726530  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:34.740253  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:34.740280  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:34.822847  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:34.822878  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:34.822891  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:34.859824  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:34.859856  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:37.390185  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:37.401413  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:37.401487  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:37.441672  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:37.441701  201245 cri.go:89] found id: ""
	I1213 09:42:37.441713  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:37.441786  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:37.445728  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:37.445798  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:37.472021  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:37.472047  201245 cri.go:89] found id: ""
	I1213 09:42:37.472056  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:37.472110  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:37.476829  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:37.476901  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:37.502452  201245 cri.go:89] found id: ""
	I1213 09:42:37.502488  201245 logs.go:282] 0 containers: []
	W1213 09:42:37.502497  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:37.502504  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:37.502567  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:37.530068  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:37.530087  201245 cri.go:89] found id: ""
	I1213 09:42:37.530105  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:37.530162  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:37.534392  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:37.534476  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:37.557115  201245 cri.go:89] found id: ""
	I1213 09:42:37.557148  201245 logs.go:282] 0 containers: []
	W1213 09:42:37.557159  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:37.557167  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:37.557233  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:37.584680  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:37.584740  201245 cri.go:89] found id: ""
	I1213 09:42:37.584755  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:37.584810  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:37.588484  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:37.588571  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:37.613474  201245 cri.go:89] found id: ""
	I1213 09:42:37.613498  201245 logs.go:282] 0 containers: []
	W1213 09:42:37.613506  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:37.613513  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:37.613595  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:37.637708  201245 cri.go:89] found id: ""
	I1213 09:42:37.637735  201245 logs.go:282] 0 containers: []
	W1213 09:42:37.637751  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:37.637764  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:37.637777  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:37.675249  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:37.675280  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:37.713362  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:37.713397  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:37.759384  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:37.759412  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:37.810159  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:37.810199  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:37.867890  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:37.867920  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:37.892829  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:37.892856  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:37.970976  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:37.970997  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:37.971010  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:38.004424  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:38.004481  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:40.586809  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:40.598952  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:40.599026  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:40.634800  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:40.634846  201245 cri.go:89] found id: ""
	I1213 09:42:40.634860  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:40.634915  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:40.639305  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:40.639397  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:40.671016  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:40.671073  201245 cri.go:89] found id: ""
	I1213 09:42:40.671083  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:40.671139  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:40.677827  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:40.677940  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:40.712369  201245 cri.go:89] found id: ""
	I1213 09:42:40.712417  201245 logs.go:282] 0 containers: []
	W1213 09:42:40.712439  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:40.712446  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:40.712542  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:40.749988  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:40.750013  201245 cri.go:89] found id: ""
	I1213 09:42:40.750070  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:40.750164  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:40.754466  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:40.754596  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:40.806364  201245 cri.go:89] found id: ""
	I1213 09:42:40.806392  201245 logs.go:282] 0 containers: []
	W1213 09:42:40.806409  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:40.806415  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:40.806487  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:40.893203  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:40.893222  201245 cri.go:89] found id: ""
	I1213 09:42:40.893231  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:40.893286  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:40.897894  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:40.897963  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:40.939900  201245 cri.go:89] found id: ""
	I1213 09:42:40.939921  201245 logs.go:282] 0 containers: []
	W1213 09:42:40.939930  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:40.939936  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:40.939999  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:40.977283  201245 cri.go:89] found id: ""
	I1213 09:42:40.977306  201245 logs.go:282] 0 containers: []
	W1213 09:42:40.977314  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:40.977329  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:40.977340  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:41.044417  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:41.044490  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:41.080540  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:41.080610  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:41.131827  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:41.131922  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:41.176530  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:41.176606  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:41.192335  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:41.192359  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:41.278055  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:41.278072  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:41.278085  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:41.323993  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:41.324075  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:41.362667  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:41.362698  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:43.913977  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:43.927314  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:43.927400  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:43.972574  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:43.972600  201245 cri.go:89] found id: ""
	I1213 09:42:43.972609  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:43.972672  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:43.977038  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:43.977112  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:44.009646  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:44.009674  201245 cri.go:89] found id: ""
	I1213 09:42:44.009682  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:44.009747  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:44.015290  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:44.015401  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:44.090505  201245 cri.go:89] found id: ""
	I1213 09:42:44.090535  201245 logs.go:282] 0 containers: []
	W1213 09:42:44.090544  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:44.090550  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:44.090610  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:44.133654  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:44.133686  201245 cri.go:89] found id: ""
	I1213 09:42:44.133694  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:44.133783  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:44.138602  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:44.138665  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:44.181813  201245 cri.go:89] found id: ""
	I1213 09:42:44.181837  201245 logs.go:282] 0 containers: []
	W1213 09:42:44.181846  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:44.181852  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:44.181913  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:44.213426  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:44.213455  201245 cri.go:89] found id: ""
	I1213 09:42:44.213464  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:44.213521  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:44.217476  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:44.217545  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:44.260557  201245 cri.go:89] found id: ""
	I1213 09:42:44.260582  201245 logs.go:282] 0 containers: []
	W1213 09:42:44.260591  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:44.260597  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:44.260722  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:44.301998  201245 cri.go:89] found id: ""
	I1213 09:42:44.302031  201245 logs.go:282] 0 containers: []
	W1213 09:42:44.302041  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:44.302072  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:44.302105  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:44.316178  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:44.316213  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:44.361077  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:44.361114  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:44.406908  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:44.406941  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:44.446599  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:44.446634  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:44.506905  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:44.506939  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:44.580227  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:44.580310  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:44.698592  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:44.698624  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:44.698654  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:44.774846  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:44.774924  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:47.381997  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:47.392084  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:47.392168  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:47.427090  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:47.427111  201245 cri.go:89] found id: ""
	I1213 09:42:47.427120  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:47.427231  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:47.436120  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:47.436200  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:47.491291  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:47.491309  201245 cri.go:89] found id: ""
	I1213 09:42:47.491317  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:47.491371  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:47.495460  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:47.495623  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:47.524076  201245 cri.go:89] found id: ""
	I1213 09:42:47.524096  201245 logs.go:282] 0 containers: []
	W1213 09:42:47.524105  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:47.524111  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:47.524165  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:47.554269  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:47.554288  201245 cri.go:89] found id: ""
	I1213 09:42:47.554296  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:47.554349  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:47.558188  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:47.558313  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:47.584504  201245 cri.go:89] found id: ""
	I1213 09:42:47.584526  201245 logs.go:282] 0 containers: []
	W1213 09:42:47.584535  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:47.584541  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:47.584600  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:47.629586  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:47.629606  201245 cri.go:89] found id: ""
	I1213 09:42:47.629615  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:47.629670  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:47.642017  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:47.642094  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:47.670000  201245 cri.go:89] found id: ""
	I1213 09:42:47.670028  201245 logs.go:282] 0 containers: []
	W1213 09:42:47.670037  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:47.670044  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:47.670108  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:47.706733  201245 cri.go:89] found id: ""
	I1213 09:42:47.706754  201245 logs.go:282] 0 containers: []
	W1213 09:42:47.706762  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:47.706777  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:47.706789  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:47.764290  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:47.764329  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:47.812549  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:47.812587  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:47.854603  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:47.854641  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:47.937899  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:47.937973  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:47.962402  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:47.962425  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:48.068032  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
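This is the failure that every gather cycle below repeats: the in-node kubeconfig (/var/lib/minikube/kubeconfig) points kubectl at localhost:8443, and the connection is refused even though crictl reports a kube-apiserver container (37f8a956...). The container exists but nothing is serving on the port. A minimal manual check from inside the node, assuming ss and curl are available in the node image (neither appears in the captured output):

	  # is anything listening on the apiserver port?
	  sudo ss -tlnp | grep 8443
	  # probe the apiserver health endpoint directly (-k: self-signed cert)
	  sudo curl -sk https://localhost:8443/livez

If the probe keeps failing, the apiserver container's own log, gathered above via crictl, is the place to look for a crash or bind error.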
	I1213 09:42:48.068063  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:48.068082  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:48.102557  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:48.102588  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:48.134689  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:48.134723  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
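Each block like the one above is one pass of minikube's apiserver wait loop: poll for the process (sudo pgrep -xnf kube-apiserver.*minikube.*), enumerate control-plane containers through crictl, then tail the last 400 lines of each component's logs. The same probes can be replayed by hand; a minimal sketch using only commands that appear verbatim in this log, plus minikube ssh to reach the node (profile name taken from this run):

	  # open a shell inside the minikube node for this profile
	  minikube ssh -p functional-074420
	  # inside the node: list apiserver containers, running or exited
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail a component's container log (substitute an ID from the listing)
	  sudo /usr/local/bin/crictl logs --tail 400 <container-id>
	  # host services come straight from journald
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400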
	I1213 09:42:50.691409  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:50.702137  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:50.702217  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:50.728557  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:50.728581  201245 cri.go:89] found id: ""
	I1213 09:42:50.728589  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:50.728648  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:50.732535  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:50.732610  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:50.778349  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:50.778372  201245 cri.go:89] found id: ""
	I1213 09:42:50.778381  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:50.778437  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:50.782487  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:50.782554  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:50.820342  201245 cri.go:89] found id: ""
	I1213 09:42:50.820366  201245 logs.go:282] 0 containers: []
	W1213 09:42:50.820374  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:50.820387  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:50.820445  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:50.854939  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:50.854960  201245 cri.go:89] found id: ""
	I1213 09:42:50.854968  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:50.855024  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:50.859566  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:50.859638  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:50.899341  201245 cri.go:89] found id: ""
	I1213 09:42:50.899362  201245 logs.go:282] 0 containers: []
	W1213 09:42:50.899371  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:50.899377  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:50.899435  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:50.929403  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:50.929473  201245 cri.go:89] found id: ""
	I1213 09:42:50.929495  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:50.929584  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:50.933526  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:50.933604  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:50.979608  201245 cri.go:89] found id: ""
	I1213 09:42:50.979636  201245 logs.go:282] 0 containers: []
	W1213 09:42:50.979646  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:50.979652  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:50.979715  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:51.011377  201245 cri.go:89] found id: ""
	I1213 09:42:51.011412  201245 logs.go:282] 0 containers: []
	W1213 09:42:51.011422  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:51.011446  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:51.011463  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:51.060432  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:51.060462  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:51.112050  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:51.112084  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:51.154479  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:51.154524  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:51.185577  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:51.185614  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:51.243318  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:51.243353  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:51.260130  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:51.260164  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:51.374678  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:51.374697  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:51.374709  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:51.424308  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:51.424386  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:53.980432  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:53.990795  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:53.990869  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:54.056075  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:54.056093  201245 cri.go:89] found id: ""
	I1213 09:42:54.056102  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:54.056162  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:54.060517  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:54.060585  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:54.126302  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:54.126326  201245 cri.go:89] found id: ""
	I1213 09:42:54.126334  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:54.126389  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:54.130397  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:54.130468  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:54.161819  201245 cri.go:89] found id: ""
	I1213 09:42:54.161844  201245 logs.go:282] 0 containers: []
	W1213 09:42:54.161854  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:54.161859  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:54.161919  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:54.197803  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:54.197825  201245 cri.go:89] found id: ""
	I1213 09:42:54.197833  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:54.197887  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:54.201833  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:54.201902  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:54.231936  201245 cri.go:89] found id: ""
	I1213 09:42:54.231961  201245 logs.go:282] 0 containers: []
	W1213 09:42:54.231970  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:54.231976  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:54.232031  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:54.261149  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:54.261172  201245 cri.go:89] found id: ""
	I1213 09:42:54.261181  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:54.261235  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:54.265320  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:54.265389  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:54.291966  201245 cri.go:89] found id: ""
	I1213 09:42:54.291991  201245 logs.go:282] 0 containers: []
	W1213 09:42:54.291999  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:54.292006  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:54.292064  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:54.319190  201245 cri.go:89] found id: ""
	I1213 09:42:54.319215  201245 logs.go:282] 0 containers: []
	W1213 09:42:54.319224  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:54.319240  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:54.319252  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:54.333104  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:54.333131  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:54.412814  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:54.412837  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:54.412850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:54.468996  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:54.469031  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:54.515960  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:54.515989  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:54.573473  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:54.573506  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:54.604792  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:54.604824  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:54.665449  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:54.665475  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:54.725781  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:54.725813  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:57.263613  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:42:57.273549  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:42:57.273625  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:42:57.298769  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:57.298792  201245 cri.go:89] found id: ""
	I1213 09:42:57.298800  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:42:57.298877  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:57.302903  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:42:57.302975  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:42:57.327924  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:57.327998  201245 cri.go:89] found id: ""
	I1213 09:42:57.328020  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:42:57.328109  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:57.331860  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:42:57.331978  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:42:57.356064  201245 cri.go:89] found id: ""
	I1213 09:42:57.356088  201245 logs.go:282] 0 containers: []
	W1213 09:42:57.356097  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:42:57.356103  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:42:57.356159  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:42:57.381800  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:42:57.381870  201245 cri.go:89] found id: ""
	I1213 09:42:57.381893  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:42:57.381988  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:57.385959  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:42:57.386042  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:42:57.413355  201245 cri.go:89] found id: ""
	I1213 09:42:57.413377  201245 logs.go:282] 0 containers: []
	W1213 09:42:57.413385  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:42:57.413391  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:42:57.413497  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:42:57.438896  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:57.438919  201245 cri.go:89] found id: ""
	I1213 09:42:57.438927  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:42:57.438984  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:42:57.442483  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:42:57.442549  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:42:57.469088  201245 cri.go:89] found id: ""
	I1213 09:42:57.469113  201245 logs.go:282] 0 containers: []
	W1213 09:42:57.469122  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:42:57.469171  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:42:57.469244  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:42:57.498601  201245 cri.go:89] found id: ""
	I1213 09:42:57.498641  201245 logs.go:282] 0 containers: []
	W1213 09:42:57.498658  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:42:57.498679  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:42:57.498693  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:42:57.558353  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:42:57.558378  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:42:57.577615  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:42:57.577642  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:42:57.675410  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:42:57.675497  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:42:57.675557  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:42:57.735397  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:42:57.735703  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:42:57.797882  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:42:57.797953  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:42:57.832148  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:42:57.832229  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:42:57.888899  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:42:57.888973  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:42:57.929174  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:42:57.929248  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:00.477529  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:00.488289  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:00.488390  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:00.514902  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:00.514930  201245 cri.go:89] found id: ""
	I1213 09:43:00.514938  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:00.514998  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:00.518730  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:00.518795  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:00.543390  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:00.543409  201245 cri.go:89] found id: ""
	I1213 09:43:00.543418  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:00.543471  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:00.547350  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:00.547418  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:00.574704  201245 cri.go:89] found id: ""
	I1213 09:43:00.574729  201245 logs.go:282] 0 containers: []
	W1213 09:43:00.574737  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:00.574744  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:00.574803  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:00.606338  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:00.606361  201245 cri.go:89] found id: ""
	I1213 09:43:00.606369  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:00.606427  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:00.610279  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:00.610350  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:00.634062  201245 cri.go:89] found id: ""
	I1213 09:43:00.634085  201245 logs.go:282] 0 containers: []
	W1213 09:43:00.634093  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:00.634099  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:00.634164  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:00.661770  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:00.661831  201245 cri.go:89] found id: ""
	I1213 09:43:00.661852  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:00.661928  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:00.665640  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:00.665709  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:00.695129  201245 cri.go:89] found id: ""
	I1213 09:43:00.695154  201245 logs.go:282] 0 containers: []
	W1213 09:43:00.695162  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:00.695168  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:00.695226  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:00.719437  201245 cri.go:89] found id: ""
	I1213 09:43:00.719462  201245 logs.go:282] 0 containers: []
	W1213 09:43:00.719471  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:00.719486  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:00.719499  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:00.753271  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:00.753306  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:00.792469  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:00.792505  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:00.821309  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:00.821346  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:00.849826  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:00.849856  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:00.911423  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:00.911459  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:00.924198  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:00.924226  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:00.994997  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:00.995075  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:00.995107  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:01.039837  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:01.039910  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:03.590656  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:03.603942  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:03.604009  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:03.648479  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:03.648498  201245 cri.go:89] found id: ""
	I1213 09:43:03.648506  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:03.648558  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:03.652803  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:03.652868  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:03.689695  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:03.689766  201245 cri.go:89] found id: ""
	I1213 09:43:03.689788  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:03.689872  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:03.695702  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:03.695767  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:03.727804  201245 cri.go:89] found id: ""
	I1213 09:43:03.727831  201245 logs.go:282] 0 containers: []
	W1213 09:43:03.727840  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:03.727846  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:03.727903  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:03.761329  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:03.761347  201245 cri.go:89] found id: ""
	I1213 09:43:03.761355  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:03.761410  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:03.765448  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:03.765513  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:03.794078  201245 cri.go:89] found id: ""
	I1213 09:43:03.794094  201245 logs.go:282] 0 containers: []
	W1213 09:43:03.794103  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:03.794109  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:03.794157  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:03.823744  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:03.823763  201245 cri.go:89] found id: ""
	I1213 09:43:03.823772  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:03.823826  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:03.828242  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:03.828358  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:03.868431  201245 cri.go:89] found id: ""
	I1213 09:43:03.868453  201245 logs.go:282] 0 containers: []
	W1213 09:43:03.868461  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:03.868468  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:03.868526  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:03.914041  201245 cri.go:89] found id: ""
	I1213 09:43:03.914120  201245 logs.go:282] 0 containers: []
	W1213 09:43:03.914143  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:03.914183  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:03.914213  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:03.929221  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:03.929299  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:04.035328  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:04.035406  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:04.035436  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:04.113993  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:04.114077  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:04.174461  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:04.174549  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:04.223185  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:04.223262  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:04.262714  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:04.262785  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:04.294213  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:04.294259  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:04.363608  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:04.363638  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:06.901346  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:06.912359  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:06.912435  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:06.940605  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:06.940631  201245 cri.go:89] found id: ""
	I1213 09:43:06.940640  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:06.940693  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:06.944719  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:06.944800  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:06.976816  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:06.976841  201245 cri.go:89] found id: ""
	I1213 09:43:06.976850  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:06.976905  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:06.981361  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:06.981439  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:07.024908  201245 cri.go:89] found id: ""
	I1213 09:43:07.024935  201245 logs.go:282] 0 containers: []
	W1213 09:43:07.024943  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:07.024949  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:07.025003  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:07.064403  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:07.064428  201245 cri.go:89] found id: ""
	I1213 09:43:07.064437  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:07.064494  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:07.068655  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:07.068728  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:07.101360  201245 cri.go:89] found id: ""
	I1213 09:43:07.101386  201245 logs.go:282] 0 containers: []
	W1213 09:43:07.101395  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:07.101401  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:07.101461  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:07.164079  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:07.164116  201245 cri.go:89] found id: ""
	I1213 09:43:07.164125  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:07.164217  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:07.168385  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:07.168483  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:07.201834  201245 cri.go:89] found id: ""
	I1213 09:43:07.201860  201245 logs.go:282] 0 containers: []
	W1213 09:43:07.201869  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:07.201896  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:07.201999  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:07.228362  201245 cri.go:89] found id: ""
	I1213 09:43:07.228400  201245 logs.go:282] 0 containers: []
	W1213 09:43:07.228423  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:07.228459  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:07.228478  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:07.307177  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:07.307213  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:07.404651  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:07.404689  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:07.443600  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:07.443632  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:07.478140  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:07.478177  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:07.516482  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:07.516518  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:07.531721  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:07.531750  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:07.616033  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:07.616056  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:07.616070  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:07.664079  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:07.664115  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:10.204411  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:10.214990  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:10.215061  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:10.245808  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:10.245827  201245 cri.go:89] found id: ""
	I1213 09:43:10.245835  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:10.245897  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:10.250103  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:10.250166  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:10.311931  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:10.311953  201245 cri.go:89] found id: ""
	I1213 09:43:10.311962  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:10.312024  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:10.316101  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:10.316174  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:10.345602  201245 cri.go:89] found id: ""
	I1213 09:43:10.345629  201245 logs.go:282] 0 containers: []
	W1213 09:43:10.345639  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:10.345646  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:10.345706  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:10.379295  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:10.379316  201245 cri.go:89] found id: ""
	I1213 09:43:10.379325  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:10.379385  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:10.383442  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:10.383529  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:10.408471  201245 cri.go:89] found id: ""
	I1213 09:43:10.408496  201245 logs.go:282] 0 containers: []
	W1213 09:43:10.408504  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:10.408511  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:10.408567  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:10.435442  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:10.435464  201245 cri.go:89] found id: ""
	I1213 09:43:10.435473  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:10.435568  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:10.439766  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:10.439836  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:10.465602  201245 cri.go:89] found id: ""
	I1213 09:43:10.465627  201245 logs.go:282] 0 containers: []
	W1213 09:43:10.465635  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:10.465641  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:10.465701  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:10.497646  201245 cri.go:89] found id: ""
	I1213 09:43:10.497674  201245 logs.go:282] 0 containers: []
	W1213 09:43:10.497683  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:10.497696  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:10.497708  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:10.546696  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:10.546727  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:10.577078  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:10.577115  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:10.656019  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:10.656040  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:10.656053  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:10.699589  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:10.699623  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:10.750801  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:10.750830  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:10.820606  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:10.820647  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:10.835013  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:10.835048  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:10.890261  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:10.890336  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:13.431636  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:13.441976  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:13.442044  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:13.478179  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:13.478198  201245 cri.go:89] found id: ""
	I1213 09:43:13.478206  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:13.478261  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:13.482662  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:13.482735  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:13.521115  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:13.521135  201245 cri.go:89] found id: ""
	I1213 09:43:13.521143  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:13.521198  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:13.525270  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:13.525398  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:13.555638  201245 cri.go:89] found id: ""
	I1213 09:43:13.555713  201245 logs.go:282] 0 containers: []
	W1213 09:43:13.555737  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:13.555756  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:13.555844  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:13.595738  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:13.595808  201245 cri.go:89] found id: ""
	I1213 09:43:13.595836  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:13.595925  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:13.599727  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:13.599847  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:13.636972  201245 cri.go:89] found id: ""
	I1213 09:43:13.637044  201245 logs.go:282] 0 containers: []
	W1213 09:43:13.637068  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:13.637087  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:13.637177  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:13.666086  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:13.666113  201245 cri.go:89] found id: ""
	I1213 09:43:13.666122  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:13.666215  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:13.670634  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:13.670736  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:13.708338  201245 cri.go:89] found id: ""
	I1213 09:43:13.708364  201245 logs.go:282] 0 containers: []
	W1213 09:43:13.708372  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:13.708378  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:13.708441  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:13.752688  201245 cri.go:89] found id: ""
	I1213 09:43:13.752714  201245 logs.go:282] 0 containers: []
	W1213 09:43:13.752723  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:13.752740  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:13.752754  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:13.805117  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:13.805144  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:13.819505  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:13.819542  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:13.868709  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:13.868741  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:13.900572  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:13.900612  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:13.964853  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:13.964890  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:14.064491  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:14.064513  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:14.064526  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:14.100231  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:14.100265  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:14.164645  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:14.164678  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:16.711855  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:16.727957  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:16.728028  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:16.775184  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:16.775202  201245 cri.go:89] found id: ""
	I1213 09:43:16.775211  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:16.775264  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:16.784138  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:16.784207  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:16.821430  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:16.821448  201245 cri.go:89] found id: ""
	I1213 09:43:16.821456  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:16.821512  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:16.828345  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:16.828460  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:16.875304  201245 cri.go:89] found id: ""
	I1213 09:43:16.875325  201245 logs.go:282] 0 containers: []
	W1213 09:43:16.875333  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:16.875339  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:16.875397  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:16.921812  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:16.921874  201245 cri.go:89] found id: ""
	I1213 09:43:16.921896  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:16.921979  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:16.928439  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:16.928555  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:16.989191  201245 cri.go:89] found id: ""
	I1213 09:43:16.989255  201245 logs.go:282] 0 containers: []
	W1213 09:43:16.989277  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:16.989296  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:16.989385  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:17.071890  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:17.071957  201245 cri.go:89] found id: ""
	I1213 09:43:17.071979  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:17.072073  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:17.078243  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:17.078359  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:17.124684  201245 cri.go:89] found id: ""
	I1213 09:43:17.124748  201245 logs.go:282] 0 containers: []
	W1213 09:43:17.124771  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:17.124796  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:17.124880  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:17.192526  201245 cri.go:89] found id: ""
	I1213 09:43:17.192601  201245 logs.go:282] 0 containers: []
	W1213 09:43:17.192623  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:17.192648  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:17.192690  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:17.291257  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:17.291334  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:17.312595  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:17.312619  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:17.431614  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:17.431634  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:17.431646  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:17.500465  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:17.500546  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:17.551240  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:17.551322  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:17.610892  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:17.610962  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:17.677920  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:17.677988  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:17.737911  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:17.737994  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:20.282572  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:20.300445  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:20.300511  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:20.390463  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:20.390484  201245 cri.go:89] found id: ""
	I1213 09:43:20.390492  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:20.390551  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:20.400084  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:20.400155  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:20.435324  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:20.435386  201245 cri.go:89] found id: ""
	I1213 09:43:20.435408  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:20.435500  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:20.440208  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:20.440325  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:20.473559  201245 cri.go:89] found id: ""
	I1213 09:43:20.473622  201245 logs.go:282] 0 containers: []
	W1213 09:43:20.473645  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:20.473662  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:20.473747  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:20.514542  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:20.514604  201245 cri.go:89] found id: ""
	I1213 09:43:20.514626  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:20.514710  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:20.518728  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:20.518839  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:20.554247  201245 cri.go:89] found id: ""
	I1213 09:43:20.554313  201245 logs.go:282] 0 containers: []
	W1213 09:43:20.554336  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:20.554354  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:20.554483  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:20.585220  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:20.585282  201245 cri.go:89] found id: ""
	I1213 09:43:20.585302  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:20.585393  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:20.590160  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:20.590276  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:20.628533  201245 cri.go:89] found id: ""
	I1213 09:43:20.628609  201245 logs.go:282] 0 containers: []
	W1213 09:43:20.628631  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:20.628649  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:20.628738  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:20.673547  201245 cri.go:89] found id: ""
	I1213 09:43:20.673623  201245 logs.go:282] 0 containers: []
	W1213 09:43:20.673645  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:20.673670  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:20.673711  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:20.750036  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:20.750061  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:20.805182  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:20.805250  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:20.855322  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:20.855405  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:20.929310  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:20.929420  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:20.947471  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:20.947495  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:21.065898  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:21.065915  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:21.065927  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:21.147162  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:21.147232  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:21.206708  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:21.206781  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:23.742304  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:23.752130  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:23.752195  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:23.778075  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:23.778094  201245 cri.go:89] found id: ""
	I1213 09:43:23.778102  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:23.778155  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:23.781847  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:23.781920  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:23.806605  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:23.806624  201245 cri.go:89] found id: ""
	I1213 09:43:23.806632  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:23.806685  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:23.810213  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:23.810284  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:23.834041  201245 cri.go:89] found id: ""
	I1213 09:43:23.834065  201245 logs.go:282] 0 containers: []
	W1213 09:43:23.834074  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:23.834079  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:23.834137  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:23.860775  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:23.860839  201245 cri.go:89] found id: ""
	I1213 09:43:23.860860  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:23.860944  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:23.864686  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:23.864757  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:23.889985  201245 cri.go:89] found id: ""
	I1213 09:43:23.890007  201245 logs.go:282] 0 containers: []
	W1213 09:43:23.890017  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:23.890023  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:23.890080  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:23.917225  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:23.917244  201245 cri.go:89] found id: ""
	I1213 09:43:23.917258  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:23.917314  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:23.921676  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:23.921767  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:23.953158  201245 cri.go:89] found id: ""
	I1213 09:43:23.953182  201245 logs.go:282] 0 containers: []
	W1213 09:43:23.953190  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:23.953196  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:23.953272  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:23.979962  201245 cri.go:89] found id: ""
	I1213 09:43:23.979987  201245 logs.go:282] 0 containers: []
	W1213 09:43:23.979995  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:23.980038  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:23.980056  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:24.067249  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:24.067304  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:24.099178  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:24.099208  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:24.180119  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:24.180154  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:24.234617  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:24.234652  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:24.275278  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:24.275309  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:24.307070  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:24.307103  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:24.391158  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:24.391193  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:24.391206  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:24.427459  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:24.427577  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:26.971673  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:26.988374  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:43:26.988439  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:43:27.049936  201245 cri.go:89] found id: "37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:27.049956  201245 cri.go:89] found id: ""
	I1213 09:43:27.049964  201245 logs.go:282] 1 containers: [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6]
	I1213 09:43:27.050017  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:27.062963  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:43:27.063053  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:43:27.116844  201245 cri.go:89] found id: "5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:27.116878  201245 cri.go:89] found id: ""
	I1213 09:43:27.116886  201245 logs.go:282] 1 containers: [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69]
	I1213 09:43:27.116940  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:27.124105  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:43:27.124211  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:43:27.167987  201245 cri.go:89] found id: ""
	I1213 09:43:27.168013  201245 logs.go:282] 0 containers: []
	W1213 09:43:27.168022  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:43:27.168028  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:43:27.168085  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:43:27.196392  201245 cri.go:89] found id: "67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:27.196418  201245 cri.go:89] found id: ""
	I1213 09:43:27.196427  201245 logs.go:282] 1 containers: [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376]
	I1213 09:43:27.196484  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:27.201485  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:43:27.201556  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:43:27.232936  201245 cri.go:89] found id: ""
	I1213 09:43:27.232963  201245 logs.go:282] 0 containers: []
	W1213 09:43:27.232972  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:43:27.232978  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:43:27.233068  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:43:27.282901  201245 cri.go:89] found id: "1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:27.282920  201245 cri.go:89] found id: ""
	I1213 09:43:27.282928  201245 logs.go:282] 1 containers: [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d]
	I1213 09:43:27.282985  201245 ssh_runner.go:195] Run: which crictl
	I1213 09:43:27.287102  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:43:27.287171  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:43:27.316754  201245 cri.go:89] found id: ""
	I1213 09:43:27.316777  201245 logs.go:282] 0 containers: []
	W1213 09:43:27.316786  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:43:27.316792  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:43:27.316906  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:43:27.342447  201245 cri.go:89] found id: ""
	I1213 09:43:27.342470  201245 logs.go:282] 0 containers: []
	W1213 09:43:27.342478  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:43:27.342495  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:43:27.342527  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:43:27.406873  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:43:27.406905  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:43:27.426044  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:43:27.426115  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:43:27.530702  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:43:27.530720  201245 logs.go:123] Gathering logs for kube-apiserver [37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6] ...
	I1213 09:43:27.530733  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6"
	I1213 09:43:27.574688  201245 logs.go:123] Gathering logs for kube-scheduler [67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376] ...
	I1213 09:43:27.574718  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376"
	I1213 09:43:27.626358  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:43:27.626395  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:43:27.661606  201245 logs.go:123] Gathering logs for etcd [5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69] ...
	I1213 09:43:27.661641  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69"
	I1213 09:43:27.718624  201245 logs.go:123] Gathering logs for kube-controller-manager [1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d] ...
	I1213 09:43:27.718655  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d"
	I1213 09:43:27.750564  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:43:27.750594  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:43:30.287451  201245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:43:30.303087  201245 kubeadm.go:602] duration metric: took 4m3.206794263s to restartPrimaryControlPlane
	W1213 09:43:30.303151  201245 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 09:43:30.303207  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:43:30.814949  201245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:43:30.829599  201245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:43:30.839717  201245 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:43:30.839775  201245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:43:30.851971  201245 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:43:30.851987  201245 kubeadm.go:158] found existing configuration files:
	
	I1213 09:43:30.852035  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:43:30.864680  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:43:30.864789  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:43:30.873068  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:43:30.883905  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:43:30.883969  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:43:30.893199  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:43:30.902469  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:43:30.902551  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:43:30.910605  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:43:30.919489  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:43:30.919625  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:43:30.927639  201245 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:43:31.085654  201245 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:43:31.086076  201245 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:43:31.161830  201245 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:47:43.117375  201245 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 09:47:43.117411  201245 kubeadm.go:319] 
	I1213 09:47:43.117478  201245 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:47:43.121180  201245 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:47:43.121245  201245 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:47:43.121340  201245 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:47:43.121399  201245 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:47:43.121441  201245 kubeadm.go:319] OS: Linux
	I1213 09:47:43.121487  201245 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:47:43.121540  201245 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:47:43.121591  201245 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:47:43.121643  201245 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:47:43.121696  201245 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:47:43.121748  201245 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:47:43.121794  201245 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:47:43.121842  201245 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:47:43.121888  201245 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:47:43.121962  201245 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:47:43.122058  201245 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:47:43.122159  201245 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:47:43.122231  201245 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:47:43.125374  201245 out.go:252]   - Generating certificates and keys ...
	I1213 09:47:43.125474  201245 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:47:43.125540  201245 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:47:43.125612  201245 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:47:43.125668  201245 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:47:43.125734  201245 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:47:43.125785  201245 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:47:43.125844  201245 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:47:43.125912  201245 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:47:43.125983  201245 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:47:43.126051  201245 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:47:43.126087  201245 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:47:43.126139  201245 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:47:43.126187  201245 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:47:43.126241  201245 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:47:43.126294  201245 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:47:43.126353  201245 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:47:43.126405  201245 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:47:43.126508  201245 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:47:43.126571  201245 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:47:43.129316  201245 out.go:252]   - Booting up control plane ...
	I1213 09:47:43.129424  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:47:43.129530  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:47:43.129632  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:47:43.129739  201245 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:47:43.129835  201245 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:47:43.129941  201245 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:47:43.130049  201245 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:47:43.130101  201245 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:47:43.130246  201245 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:47:43.130352  201245 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:47:43.130416  201245 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000659405s
	I1213 09:47:43.130426  201245 kubeadm.go:319] 
	I1213 09:47:43.130495  201245 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:47:43.130545  201245 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:47:43.130688  201245 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:47:43.130701  201245 kubeadm.go:319] 
	I1213 09:47:43.130813  201245 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:47:43.130855  201245 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:47:43.130897  201245 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:47:43.130965  201245 kubeadm.go:319] 
	W1213 09:47:43.131015  201245 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000659405s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 09:47:43.131097  201245 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:47:43.537558  201245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:47:43.551180  201245 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:47:43.551248  201245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:47:43.559461  201245 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:47:43.559484  201245 kubeadm.go:158] found existing configuration files:
	
	I1213 09:47:43.559563  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:47:43.567358  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:47:43.567423  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:47:43.574866  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:47:43.582599  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:47:43.582662  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:47:43.590226  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:47:43.597940  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:47:43.598004  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:47:43.605299  201245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:47:43.613556  201245 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:47:43.613659  201245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
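
	The loop above greps each kubeconfig for the expected control-plane endpoint and removes any file that fails the check so kubeadm can regenerate it. A sketch of the same check-and-remove pattern, assuming direct local file access rather than minikube's ssh_runner:

	// staleconfig.go: remove kubeconfigs that do not mention the expected
	// control-plane endpoint, as in the stale-config cleanup above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q may not contain %s - removing\n", f, endpoint)
				os.Remove(f) // mirrors `sudo rm -f`; a missing file is fine
			}
		}
	}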
	I1213 09:47:43.621421  201245 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:47:43.665662  201245 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:47:43.665795  201245 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:47:43.733799  201245 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:47:43.733869  201245 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:47:43.733909  201245 kubeadm.go:319] OS: Linux
	I1213 09:47:43.733954  201245 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:47:43.734010  201245 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:47:43.734059  201245 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:47:43.734110  201245 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:47:43.734159  201245 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:47:43.734208  201245 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:47:43.734254  201245 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:47:43.734303  201245 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:47:43.734350  201245 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:47:43.800585  201245 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:47:43.800791  201245 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:47:43.800931  201245 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:47:43.806371  201245 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:47:43.811763  201245 out.go:252]   - Generating certificates and keys ...
	I1213 09:47:43.811930  201245 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:47:43.812042  201245 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:47:43.812152  201245 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:47:43.812251  201245 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:47:43.812351  201245 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:47:43.812435  201245 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:47:43.812529  201245 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:47:43.812628  201245 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:47:43.812734  201245 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:47:43.812837  201245 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:47:43.812901  201245 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:47:43.812990  201245 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:47:44.214563  201245 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:47:44.493526  201245 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:47:44.783121  201245 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:47:45.181956  201245 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:47:46.221699  201245 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:47:46.222485  201245 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:47:46.225195  201245 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:47:46.228512  201245 out.go:252]   - Booting up control plane ...
	I1213 09:47:46.228628  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:47:46.228762  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:47:46.228832  201245 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:47:46.249249  201245 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:47:46.249352  201245 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:47:46.256740  201245 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:47:46.257061  201245 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:47:46.257109  201245 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:47:46.395935  201245 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:47:46.396066  201245 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:51:46.392677  201245 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000234607s
	I1213 09:51:46.392720  201245 kubeadm.go:319] 
	I1213 09:51:46.392776  201245 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:51:46.392812  201245 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:51:46.392915  201245 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:51:46.392926  201245 kubeadm.go:319] 
	I1213 09:51:46.393024  201245 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:51:46.393060  201245 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:51:46.393090  201245 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:51:46.393098  201245 kubeadm.go:319] 
	I1213 09:51:46.397243  201245 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:51:46.397677  201245 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:51:46.397793  201245 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:51:46.398065  201245 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 09:51:46.398075  201245 kubeadm.go:319] 
	I1213 09:51:46.398144  201245 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 09:51:46.398201  201245 kubeadm.go:403] duration metric: took 12m19.37171003s to StartCluster
	I1213 09:51:46.398239  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:51:46.398302  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:51:46.424143  201245 cri.go:89] found id: ""
	I1213 09:51:46.424166  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.424180  201245 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:51:46.424186  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:51:46.424254  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:51:46.452320  201245 cri.go:89] found id: ""
	I1213 09:51:46.452342  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.452350  201245 logs.go:284] No container was found matching "etcd"
	I1213 09:51:46.452372  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:51:46.452429  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:51:46.478830  201245 cri.go:89] found id: ""
	I1213 09:51:46.478852  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.478860  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:51:46.478866  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:51:46.478935  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:51:46.506032  201245 cri.go:89] found id: ""
	I1213 09:51:46.506054  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.506063  201245 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:51:46.506069  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:51:46.506128  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:51:46.533342  201245 cri.go:89] found id: ""
	I1213 09:51:46.533365  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.533374  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:51:46.533380  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:51:46.533442  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:51:46.560163  201245 cri.go:89] found id: ""
	I1213 09:51:46.560251  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.560275  201245 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:51:46.560299  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:51:46.560380  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:51:46.586972  201245 cri.go:89] found id: ""
	I1213 09:51:46.587000  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.587009  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:51:46.587015  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:51:46.587084  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:51:46.612430  201245 cri.go:89] found id: ""
	I1213 09:51:46.612470  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.612479  201245 logs.go:284] No container was found matching "storage-provisioner"
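
	Each listing above is a `crictl ps -a --quiet --name=<component>` call whose empty output means zero containers for that component. A sketch of that loop, assuming crictl is on PATH with access to the CRI socket (error results are folded into "none found" for brevity):

	// crilist.go: list CRI container IDs per control-plane component, as the
	// cri.go log lines above do before declaring "0 containers".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listCRI(name string) []string {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, name := range components {
			ids := listCRI(name)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			} else {
				fmt.Printf("%s: %v\n", name, ids)
			}
		}
	}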
	I1213 09:51:46.612489  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:51:46.612500  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:51:46.672952  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:51:46.672991  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:51:46.686821  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:51:46.686850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:51:46.754373  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:51:46.754403  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:51:46.754416  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:51:46.798554  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:51:46.798588  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 09:51:46.827488  201245 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000234607s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:51:46.827565  201245 out.go:285] * 
	W1213 09:51:46.829784  201245 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:51:46.836423  201245 out.go:203] 
	W1213 09:51:46.839750  201245 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000234607s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:51:46.839797  201245 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:51:46.839820  201245 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:51:46.842987  201245 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-355809 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-355809 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-355809 version --output=json: exit status 1 (76.605714ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
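
A sketch of parsing the `kubectl version --output=json` payload shown above; only the fields visible in this report are modeled, and the literal JSON here is abbreviated from that output:

	// versionjson.go: decode the client-version JSON that the upgrade test
	// inspects; server fields are absent because the apiserver refused.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type versionInfo struct {
		ClientVersion struct {
			GitVersion string `json:"gitVersion"`
			Platform   string `json:"platform"`
		} `json:"clientVersion"`
		KustomizeVersion string `json:"kustomizeVersion"`
	}

	func main() {
		raw := []byte(`{"clientVersion":{"gitVersion":"v1.33.2","platform":"linux/arm64"},"kustomizeVersion":"v5.6.0"}`)
		var v versionInfo
		if err := json.Unmarshal(raw, &v); err != nil {
			panic(err)
		}
		fmt.Println(v.ClientVersion.GitVersion, v.ClientVersion.Platform, v.KustomizeVersion)
	}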
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 09:51:47.530128988 +0000 UTC m=+4959.106143777
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-355809
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-355809:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0",
	        "Created": "2025-12-13T09:38:37.216719409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:39:09.941524057Z",
	            "FinishedAt": "2025-12-13T09:39:08.839802415Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0/hosts",
	        "LogPath": "/var/lib/docker/containers/9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0/9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0-json.log",
	        "Name": "/kubernetes-upgrade-355809",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-355809:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-355809",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b0bda50885973cb52b267e1ab4beb193278fd4a5f09cdc22459f3c0219695d0",
	                "LowerDir": "/var/lib/docker/overlay2/c2fa4f63eb771fd4d9ffe535cdd23f6faafc9c2c396eedb07ee51a18144371c6-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2fa4f63eb771fd4d9ffe535cdd23f6faafc9c2c396eedb07ee51a18144371c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2fa4f63eb771fd4d9ffe535cdd23f6faafc9c2c396eedb07ee51a18144371c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2fa4f63eb771fd4d9ffe535cdd23f6faafc9c2c396eedb07ee51a18144371c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-355809",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-355809/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-355809",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-355809",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-355809",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1defd6de8404f70c15f13891860565b183e7fb5f60b7de29d77a49d9962d1a7",
	            "SandboxKey": "/var/run/docker/netns/e1defd6de840",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-355809": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:46:c0:0a:9e:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1c52282e2fe467d37f7cb27541fcd95db125f24ff377d9a5246e739a73261912",
	                    "EndpointID": "a878cf71834d8c09d0de3a24cb79c27d1f54c1c056535bcdee87130a61ec4df7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-355809",
	                        "9b0bda508859"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
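
The inspect output shows 8443/tcp published on 127.0.0.1:33016. A sketch reading that mapping back with docker inspect's Go template; the profile name is taken from this report, and this is illustrative rather than how the harness resolves ports:

	// hostport.go: extract the host port bound to the container's 8443/tcp,
	// i.e. where the apiserver would be reachable from the host.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"kubernetes-upgrade-355809").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}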
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-355809 -n kubernetes-upgrade-355809
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-355809 -n kubernetes-upgrade-355809: exit status 2 (346.282091ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-355809 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-324081 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-324081            │ jenkins │ v1.37.0 │ 13 Dec 25 09:43 UTC │                     │
	│ ssh     │ -p cilium-324081 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-324081            │ jenkins │ v1.37.0 │ 13 Dec 25 09:43 UTC │                     │
	│ ssh     │ -p cilium-324081 sudo crio config                                                                                                                                                                                                                   │ cilium-324081            │ jenkins │ v1.37.0 │ 13 Dec 25 09:43 UTC │                     │
	│ delete  │ -p cilium-324081                                                                                                                                                                                                                                    │ cilium-324081            │ jenkins │ v1.37.0 │ 13 Dec 25 09:43 UTC │ 13 Dec 25 09:43 UTC │
	│ start   │ -p force-systemd-env-319338 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-319338 │ jenkins │ v1.37.0 │ 13 Dec 25 09:43 UTC │ 13 Dec 25 09:44 UTC │
	│ ssh     │ force-systemd-env-319338 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-319338 │ jenkins │ v1.37.0 │ 13 Dec 25 09:44 UTC │ 13 Dec 25 09:44 UTC │
	│ delete  │ -p force-systemd-env-319338                                                                                                                                                                                                                         │ force-systemd-env-319338 │ jenkins │ v1.37.0 │ 13 Dec 25 09:44 UTC │ 13 Dec 25 09:44 UTC │
	│ start   │ -p cert-expiration-482836 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-482836   │ jenkins │ v1.37.0 │ 13 Dec 25 09:44 UTC │ 13 Dec 25 09:44 UTC │
	│ start   │ -p cert-expiration-482836 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-482836   │ jenkins │ v1.37.0 │ 13 Dec 25 09:47 UTC │ 13 Dec 25 09:47 UTC │
	│ delete  │ -p cert-expiration-482836                                                                                                                                                                                                                           │ cert-expiration-482836   │ jenkins │ v1.37.0 │ 13 Dec 25 09:47 UTC │ 13 Dec 25 09:47 UTC │
	│ start   │ -p cert-options-993197 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-993197      │ jenkins │ v1.37.0 │ 13 Dec 25 09:47 UTC │ 13 Dec 25 09:48 UTC │
	│ ssh     │ cert-options-993197 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-993197      │ jenkins │ v1.37.0 │ 13 Dec 25 09:48 UTC │ 13 Dec 25 09:48 UTC │
	│ ssh     │ -p cert-options-993197 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-993197      │ jenkins │ v1.37.0 │ 13 Dec 25 09:48 UTC │ 13 Dec 25 09:48 UTC │
	│ delete  │ -p cert-options-993197                                                                                                                                                                                                                              │ cert-options-993197      │ jenkins │ v1.37.0 │ 13 Dec 25 09:48 UTC │ 13 Dec 25 09:48 UTC │
	│ start   │ -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:48 UTC │ 13 Dec 25 09:49 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-640993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:49 UTC │ 13 Dec 25 09:49 UTC │
	│ stop    │ -p old-k8s-version-640993 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:49 UTC │ 13 Dec 25 09:49 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-640993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:49 UTC │ 13 Dec 25 09:49 UTC │
	│ start   │ -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:49 UTC │ 13 Dec 25 09:50 UTC │
	│ image   │ old-k8s-version-640993 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:50 UTC │ 13 Dec 25 09:50 UTC │
	│ pause   │ -p old-k8s-version-640993 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:50 UTC │ 13 Dec 25 09:50 UTC │
	│ unpause │ -p old-k8s-version-640993 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-640993                                                                                                                                                                                                                           │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p old-k8s-version-640993                                                                                                                                                                                                                           │ old-k8s-version-640993   │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                        │ embed-certs-238987       │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:51:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:51:05.432852  250842 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:51:05.432984  250842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:05.432993  250842 out.go:374] Setting ErrFile to fd 2...
	I1213 09:51:05.432999  250842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:05.433271  250842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:51:05.433687  250842 out.go:368] Setting JSON to false
	I1213 09:51:05.434524  250842 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5618,"bootTime":1765613848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:51:05.434588  250842 start.go:143] virtualization:  
	I1213 09:51:05.438662  250842 out.go:179] * [embed-certs-238987] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:51:05.443082  250842 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:51:05.443176  250842 notify.go:221] Checking for updates...
	I1213 09:51:05.449517  250842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:51:05.452577  250842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:51:05.455667  250842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:51:05.458733  250842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:51:05.461705  250842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:51:05.465232  250842 config.go:182] Loaded profile config "kubernetes-upgrade-355809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:51:05.465342  250842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:51:05.489131  250842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:51:05.489251  250842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:51:05.560094  250842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:51:05.550797329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:51:05.560195  250842 docker.go:319] overlay module found
	I1213 09:51:05.563403  250842 out.go:179] * Using the docker driver based on user configuration
	I1213 09:51:05.566240  250842 start.go:309] selected driver: docker
	I1213 09:51:05.566264  250842 start.go:927] validating driver "docker" against <nil>
	I1213 09:51:05.566277  250842 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:51:05.566990  250842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:51:05.633422  250842 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:51:05.624216485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:51:05.633577  250842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:51:05.633805  250842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:51:05.636830  250842 out.go:179] * Using Docker driver with root privileges
	I1213 09:51:05.639725  250842 cni.go:84] Creating CNI manager for ""
	I1213 09:51:05.639790  250842 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:51:05.639804  250842 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:51:05.639879  250842 start.go:353] cluster config:
	{Name:embed-certs-238987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-238987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:51:05.642958  250842 out.go:179] * Starting "embed-certs-238987" primary control-plane node in "embed-certs-238987" cluster
	I1213 09:51:05.645849  250842 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:51:05.648778  250842 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:51:05.651730  250842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 09:51:05.651776  250842 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 09:51:05.651800  250842 cache.go:65] Caching tarball of preloaded images
	I1213 09:51:05.651815  250842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:51:05.651881  250842 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:51:05.651891  250842 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 09:51:05.651995  250842 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/config.json ...
	I1213 09:51:05.652011  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/config.json: {Name:mk6ee8505e00c0c61b9d1016ba78c7660b67342f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:05.671141  250842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:51:05.671163  250842 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:51:05.671181  250842 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:51:05.671210  250842 start.go:360] acquireMachinesLock for embed-certs-238987: {Name:mk02b6fd7d4340a12a1ae0bab50bef6fef7b466c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:05.671922  250842 start.go:364] duration metric: took 678.201µs to acquireMachinesLock for "embed-certs-238987"
	I1213 09:51:05.671961  250842 start.go:93] Provisioning new machine with config: &{Name:embed-certs-238987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-238987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:51:05.672036  250842 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:51:05.675626  250842 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:51:05.675864  250842 start.go:159] libmachine.API.Create for "embed-certs-238987" (driver="docker")
	I1213 09:51:05.675904  250842 client.go:173] LocalClient.Create starting
	I1213 09:51:05.675983  250842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:51:05.676033  250842 main.go:143] libmachine: Decoding PEM data...
	I1213 09:51:05.676053  250842 main.go:143] libmachine: Parsing certificate...
	I1213 09:51:05.676108  250842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:51:05.676130  250842 main.go:143] libmachine: Decoding PEM data...
	I1213 09:51:05.676146  250842 main.go:143] libmachine: Parsing certificate...
	I1213 09:51:05.676496  250842 cli_runner.go:164] Run: docker network inspect embed-certs-238987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:51:05.692146  250842 cli_runner.go:211] docker network inspect embed-certs-238987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:51:05.692246  250842 network_create.go:284] running [docker network inspect embed-certs-238987] to gather additional debugging logs...
	I1213 09:51:05.692273  250842 cli_runner.go:164] Run: docker network inspect embed-certs-238987
	W1213 09:51:05.707770  250842 cli_runner.go:211] docker network inspect embed-certs-238987 returned with exit code 1
	I1213 09:51:05.707801  250842 network_create.go:287] error running [docker network inspect embed-certs-238987]: docker network inspect embed-certs-238987: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-238987 not found
	I1213 09:51:05.707815  250842 network_create.go:289] output of [docker network inspect embed-certs-238987]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-238987 not found
	
	** /stderr **
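
The failed inspect above is the network-existence probe: a non-zero exit whose stderr says the network was not found is treated as "create it", not as a fatal error. A small Go sketch of that classification follows, assuming only the Docker CLI on PATH; it is illustrative, not minikube's actual network_create.go.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// networkExists probes for a Docker network the way the log above does:
// run `docker network inspect NAME` and treat a non-zero exit whose
// stderr contains "not found" as "the network does not exist yet".
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "not found") {
			return false, nil // absent: caller should create it
		}
		return false, fmt.Errorf("inspect %s: %v: %s", name, err, stderr.String())
	}
	return true, nil
}

func main() {
	ok, err := networkExists("embed-certs-238987")
	if err != nil {
		panic(err)
	}
	fmt.Println("network exists:", ok)
}
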
	I1213 09:51:05.707909  250842 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:51:05.724415  250842 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:51:05.724751  250842 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:51:05.725029  250842 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:51:05.725325  250842 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1c52282e2fe4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:9b:94:73:92:8b} reservation:<nil>}
	I1213 09:51:05.725743  250842 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a9dc0}
	I1213 09:51:05.725779  250842 network_create.go:124] attempt to create docker network embed-certs-238987 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:51:05.725836  250842 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-238987 embed-certs-238987
	I1213 09:51:05.781378  250842 network_create.go:108] docker network embed-certs-238987 192.168.85.0/24 created
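
The four "skipping subnet ... that is taken" lines show the allocator stepping through candidate /24 networks (.49, .58, .67, .76) until 192.168.85.0/24 is free. Below is a simplified sketch of that walk; the step size of 9 in the third octet and the gateway-on-host-interface probe are assumptions inferred from the addresses in this log, not minikube's actual reservation logic.

package main

import (
	"fmt"
	"net"
)

// freeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns the
// first candidate whose .1 gateway is not already assigned to a host
// interface (a taken subnet's bridge owns that address).
func freeSubnet() (*net.IPNet, error) {
	taken := func(gw net.IP) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(gw) {
				return true
			}
		}
		return false
	}
	for octet := 49; octet < 255; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if !taken(gw) {
			return &net.IPNet{IP: net.IPv4(192, 168, byte(octet), 0), Mask: net.CIDRMask(24, 32)}, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	n, err := freeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using", n) // e.g. 192.168.85.0/24, as chosen in the log
}
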
	I1213 09:51:05.781410  250842 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-238987" container
	I1213 09:51:05.781480  250842 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:51:05.797288  250842 cli_runner.go:164] Run: docker volume create embed-certs-238987 --label name.minikube.sigs.k8s.io=embed-certs-238987 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:51:05.815284  250842 oci.go:103] Successfully created a docker volume embed-certs-238987
	I1213 09:51:05.815383  250842 cli_runner.go:164] Run: docker run --rm --name embed-certs-238987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-238987 --entrypoint /usr/bin/test -v embed-certs-238987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:51:06.334871  250842 oci.go:107] Successfully prepared a docker volume embed-certs-238987
	I1213 09:51:06.334934  250842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 09:51:06.334947  250842 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:51:06.335017  250842 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-238987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:51:10.765765  250842 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-238987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.430708787s)
	I1213 09:51:10.765795  250842 kic.go:203] duration metric: took 4.430845125s to extract preloaded images to volume ...
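
The 4.4s extraction above runs tar inside a throwaway container that mounts the lz4 preload tarball read-only alongside the cluster's named volume, and unpacks the preloaded images into it. The same invocation wrapped in Go's os/exec, with the arguments copied from the command in the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars the preloaded-images tarball into a Docker named
// volume by running tar as the entrypoint of a --rm kicbase container.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4",
		"embed-certs-238987",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083")
	if err != nil {
		panic(err)
	}
}
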
	W1213 09:51:10.765952  250842 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:51:10.766072  250842 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:51:10.850915  250842 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-238987 --name embed-certs-238987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-238987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-238987 --network embed-certs-238987 --ip 192.168.85.2 --volume embed-certs-238987:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:51:11.163080  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Running}}
	I1213 09:51:11.185280  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:11.207356  250842 cli_runner.go:164] Run: docker exec embed-certs-238987 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:51:11.254478  250842 oci.go:144] the created container "embed-certs-238987" has a running status.
	I1213 09:51:11.254504  250842 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa...
	I1213 09:51:11.776731  250842 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:51:11.806028  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:11.845310  250842 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:51:11.845334  250842 kic_runner.go:114] Args: [docker exec --privileged embed-certs-238987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:51:11.922052  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:11.948705  250842 machine.go:94] provisionDockerMachine start ...
	I1213 09:51:11.948883  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:11.976018  250842 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:11.978545  250842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1213 09:51:11.978572  250842 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:51:12.164410  250842 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-238987
	
	I1213 09:51:12.164436  250842 ubuntu.go:182] provisioning hostname "embed-certs-238987"
	I1213 09:51:12.164501  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:12.184222  250842 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:12.184538  250842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1213 09:51:12.184561  250842 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-238987 && echo "embed-certs-238987" | sudo tee /etc/hostname
	I1213 09:51:12.354155  250842 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-238987
	
	I1213 09:51:12.354308  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:12.372098  250842 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:12.374877  250842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1213 09:51:12.374911  250842 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-238987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-238987/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-238987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:51:12.535748  250842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:51:12.535816  250842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:51:12.535858  250842 ubuntu.go:190] setting up certificates
	I1213 09:51:12.535896  250842 provision.go:84] configureAuth start
	I1213 09:51:12.536006  250842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-238987
	I1213 09:51:12.561005  250842 provision.go:143] copyHostCerts
	I1213 09:51:12.561073  250842 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:51:12.561082  250842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:51:12.561159  250842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:51:12.561253  250842 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:51:12.561258  250842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:51:12.561284  250842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:51:12.561363  250842 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:51:12.561368  250842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:51:12.561391  250842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:51:12.561439  250842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.embed-certs-238987 san=[127.0.0.1 192.168.85.2 embed-certs-238987 localhost minikube]
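
The configureAuth step above mints a server certificate whose subject alternative names are exactly the entries in the san=[...] field. A compressed sketch of that kind of cert generation with Go's crypto/x509 follows; unlike minikube, which signs with its CA key pair, this version self-signs to stay short, and the SAN list is copied from the log line above.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed server certificate carrying the SANs seen in the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-238987"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-238987", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
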
	I1213 09:51:12.786519  250842 provision.go:177] copyRemoteCerts
	I1213 09:51:12.786588  250842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:51:12.786635  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:12.805043  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:12.911171  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:51:12.929764  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:51:12.946862  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:51:12.964091  250842 provision.go:87] duration metric: took 428.154519ms to configureAuth
	I1213 09:51:12.964119  250842 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:51:12.964309  250842 config.go:182] Loaded profile config "embed-certs-238987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:51:12.964322  250842 machine.go:97] duration metric: took 1.015593451s to provisionDockerMachine
	I1213 09:51:12.964329  250842 client.go:176] duration metric: took 7.28841419s to LocalClient.Create
	I1213 09:51:12.964349  250842 start.go:167] duration metric: took 7.288485985s to libmachine.API.Create "embed-certs-238987"
	I1213 09:51:12.964356  250842 start.go:293] postStartSetup for "embed-certs-238987" (driver="docker")
	I1213 09:51:12.964369  250842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:51:12.964417  250842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:51:12.964473  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:12.981389  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:13.087703  250842 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:51:13.090948  250842 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:51:13.090977  250842 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:51:13.090989  250842 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:51:13.091048  250842 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:51:13.091129  250842 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:51:13.091238  250842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:51:13.098655  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:51:13.116074  250842 start.go:296] duration metric: took 151.700093ms for postStartSetup
	I1213 09:51:13.116480  250842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-238987
	I1213 09:51:13.133370  250842 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/config.json ...
	I1213 09:51:13.133659  250842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:51:13.133719  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:13.153244  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:13.252313  250842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:51:13.256911  250842 start.go:128] duration metric: took 7.584845497s to createHost
	I1213 09:51:13.256937  250842 start.go:83] releasing machines lock for "embed-certs-238987", held for 7.584994889s
	I1213 09:51:13.257010  250842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-238987
	I1213 09:51:13.282451  250842 ssh_runner.go:195] Run: cat /version.json
	I1213 09:51:13.282524  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:13.282564  250842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:51:13.282632  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:13.309480  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:13.319676  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:13.423149  250842 ssh_runner.go:195] Run: systemctl --version
	I1213 09:51:13.511456  250842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:51:13.515744  250842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:51:13.515868  250842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:51:13.544553  250842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 09:51:13.544574  250842 start.go:496] detecting cgroup driver to use...
	I1213 09:51:13.544605  250842 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:51:13.544658  250842 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:51:13.559097  250842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:51:13.571675  250842 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:51:13.571741  250842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:51:13.588762  250842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:51:13.607155  250842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:51:13.729408  250842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:51:13.856517  250842 docker.go:234] disabling docker service ...
	I1213 09:51:13.856633  250842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:51:13.877828  250842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:51:13.890529  250842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:51:14.009192  250842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:51:14.161053  250842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:51:14.173867  250842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:51:14.188120  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:51:14.196743  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:51:14.205626  250842 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:51:14.205737  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:51:14.214816  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:51:14.223554  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:51:14.232798  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:51:14.241787  250842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:51:14.249708  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:51:14.258179  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:51:14.266657  250842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:51:14.275384  250842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:51:14.282928  250842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:51:14.290048  250842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:51:14.407404  250842 ssh_runner.go:195] Run: sudo systemctl restart containerd
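
The run of sed commands above (crictl endpoint, sandbox_image, SystemdCgroup, CNI conf_dir) aligns containerd with the "cgroupfs" cgroup driver detected earlier, before the daemon-reload and restart. The SystemdCgroup rewrite expressed in Go, as an illustrative equivalent of the sed at 09:51:14.205737; the path and in-place write are assumptions for illustration, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Force SystemdCgroup = false in containerd's config.toml so containerd
// and the kubelet agree on the cgroupfs cgroup driver.
func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
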
	I1213 09:51:14.556459  250842 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:51:14.556606  250842 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:51:14.560502  250842 start.go:564] Will wait 60s for crictl version
	I1213 09:51:14.560576  250842 ssh_runner.go:195] Run: which crictl
	I1213 09:51:14.563963  250842 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:51:14.587580  250842 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:51:14.587794  250842 ssh_runner.go:195] Run: containerd --version
	I1213 09:51:14.607409  250842 ssh_runner.go:195] Run: containerd --version
	I1213 09:51:14.633446  250842 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 09:51:14.636367  250842 cli_runner.go:164] Run: docker network inspect embed-certs-238987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:51:14.653915  250842 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:51:14.657575  250842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:51:14.666815  250842 kubeadm.go:884] updating cluster {Name:embed-certs-238987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-238987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:51:14.666954  250842 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 09:51:14.667023  250842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:51:14.691282  250842 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:51:14.691308  250842 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:51:14.691365  250842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:51:14.714803  250842 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:51:14.714825  250842 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:51:14.714834  250842 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1213 09:51:14.714932  250842 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-238987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-238987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:51:14.715001  250842 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:51:14.740336  250842 cni.go:84] Creating CNI manager for ""
	I1213 09:51:14.740360  250842 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:51:14.740376  250842 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:51:14.740398  250842 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-238987 NodeName:embed-certs-238987 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:51:14.740513  250842 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-238987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:51:14.740584  250842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:51:14.747951  250842 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:51:14.748046  250842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:51:14.755054  250842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1213 09:51:14.767773  250842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:51:14.784186  250842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
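The 2231-byte file staged here is the kubeadm config printed above. Before init consumes it, it can be sanity-checked in place (a sketch; the 'kubeadm config validate' subcommand is available in recent kubeadm releases, and the binaries path is the one from this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new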
	I1213 09:51:14.798562  250842 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:51:14.802498  250842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
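The one-liner above is an idempotent /etc/hosts edit: drop any stale control-plane.minikube.internal entry, append the current mapping, and copy the temp file back in via sudo. It can be verified afterwards with (sketch):

    minikube ssh -p embed-certs-238987 -- grep control-plane.minikube.internal /etc/hosts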
	I1213 09:51:14.812734  250842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:51:14.927536  250842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:51:14.945775  250842 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987 for IP: 192.168.85.2
	I1213 09:51:14.945799  250842 certs.go:195] generating shared ca certs ...
	I1213 09:51:14.945815  250842 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:14.946009  250842 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:51:14.946078  250842 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:51:14.946089  250842 certs.go:257] generating profile certs ...
	I1213 09:51:14.946166  250842 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.key
	I1213 09:51:14.946184  250842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.crt with IP's: []
	I1213 09:51:15.041240  250842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.crt ...
	I1213 09:51:15.041282  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.crt: {Name:mkc2605120b075e95c360cd9c04bd229d36aa6bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.042318  250842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.key ...
	I1213 09:51:15.042349  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/client.key: {Name:mk485c5d3acf936458c0d0cd73c24136e5512759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.043165  250842 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key.a77a8a7d
	I1213 09:51:15.043188  250842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt.a77a8a7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:51:15.349521  250842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt.a77a8a7d ...
	I1213 09:51:15.349552  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt.a77a8a7d: {Name:mk3a57a3938b33678335608fc55f14514681df24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.350437  250842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key.a77a8a7d ...
	I1213 09:51:15.350458  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key.a77a8a7d: {Name:mkbe272aa52fad3e0444fdb2cad95ea0b5a2052b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.351164  250842 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt.a77a8a7d -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt
	I1213 09:51:15.351244  250842 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key.a77a8a7d -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key
	I1213 09:51:15.351308  250842 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.key
	I1213 09:51:15.351329  250842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.crt with IP's: []
	I1213 09:51:15.830850  250842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.crt ...
	I1213 09:51:15.830882  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.crt: {Name:mk6fd60a34af27fba2ab03c5b6e8b4d61a92041d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.831696  250842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.key ...
	I1213 09:51:15.831715  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.key: {Name:mk44671002e33d53e3bee965f4d0e4fe667a36aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:15.831918  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:51:15.831965  250842 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:51:15.831979  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:51:15.832006  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:51:15.832034  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:51:15.832061  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:51:15.832115  250842 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:51:15.832674  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:51:15.854592  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:51:15.872086  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:51:15.890146  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:51:15.908518  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:51:15.926208  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:51:15.943004  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:51:15.959985  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/embed-certs-238987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:51:15.976861  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:51:15.993948  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:51:16.016464  250842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:51:16.037039  250842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:51:16.050420  250842 ssh_runner.go:195] Run: openssl version
	I1213 09:51:16.056723  250842 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:51:16.064247  250842 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:51:16.071855  250842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:51:16.075599  250842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:51:16.075714  250842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:51:16.116829  250842 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:51:16.125043  250842 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 09:51:16.132173  250842 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:51:16.139350  250842 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:51:16.146838  250842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:51:16.150408  250842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:51:16.150516  250842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:51:16.191356  250842 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:51:16.198821  250842 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:51:16.206397  250842 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:51:16.213780  250842 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:51:16.221454  250842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:51:16.225072  250842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:51:16.225176  250842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:51:16.265978  250842 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:51:16.273945  250842 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
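The hash-then-symlink pattern repeated above is the standard OpenSSL hashed-directory convention: each trusted certificate is made reachable as <subject-hash>.0 under /etc/ssl/certs. Reproducing one link by hand (sketch):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

which matches the b5213941.0 link created a few lines up.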
	I1213 09:51:16.281629  250842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:51:16.285614  250842 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:51:16.285667  250842 kubeadm.go:401] StartCluster: {Name:embed-certs-238987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-238987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:51:16.285740  250842 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:51:16.285795  250842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:51:16.322360  250842 cri.go:89] found id: ""
	I1213 09:51:16.322436  250842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:51:16.331979  250842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:51:16.343663  250842 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:51:16.343734  250842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:51:16.351705  250842 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:51:16.351727  250842 kubeadm.go:158] found existing configuration files:
	
	I1213 09:51:16.351779  250842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:51:16.359529  250842 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:51:16.359594  250842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:51:16.367365  250842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:51:16.375234  250842 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:51:16.375318  250842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:51:16.382692  250842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:51:16.390786  250842 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:51:16.390872  250842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:51:16.398220  250842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:51:16.405856  250842 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:51:16.405940  250842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:51:16.413412  250842 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:51:16.456993  250842 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 09:51:16.457279  250842 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:51:16.495617  250842 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:51:16.495735  250842 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:51:16.495798  250842 kubeadm.go:319] OS: Linux
	I1213 09:51:16.495866  250842 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:51:16.495945  250842 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:51:16.496017  250842 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:51:16.496093  250842 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:51:16.496162  250842 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:51:16.496237  250842 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:51:16.496305  250842 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:51:16.496383  250842 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:51:16.496449  250842 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:51:16.578796  250842 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:51:16.578979  250842 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:51:16.579095  250842 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:51:16.585213  250842 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:51:16.591115  250842 out.go:252]   - Generating certificates and keys ...
	I1213 09:51:16.591217  250842 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:51:16.591288  250842 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:51:17.322434  250842 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 09:51:17.546133  250842 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 09:51:18.017443  250842 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 09:51:18.212007  250842 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 09:51:18.548334  250842 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 09:51:18.548763  250842 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-238987 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 09:51:18.693126  250842 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 09:51:18.693456  250842 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-238987 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 09:51:19.628746  250842 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 09:51:19.733453  250842 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 09:51:20.679414  250842 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 09:51:20.679776  250842 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:51:21.797776  250842 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:51:22.392854  250842 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:51:23.321378  250842 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:51:24.057682  250842 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:51:24.656324  250842 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:51:24.656908  250842 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:51:24.659925  250842 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:51:24.663601  250842 out.go:252]   - Booting up control plane ...
	I1213 09:51:24.663714  250842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:51:24.663792  250842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:51:24.663858  250842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:51:24.682563  250842 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:51:24.682929  250842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:51:24.690259  250842 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:51:24.690593  250842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:51:24.690640  250842 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:51:24.832836  250842 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:51:24.832956  250842 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:51:25.833760  250842 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001786896s
	I1213 09:51:25.837331  250842 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 09:51:25.837426  250842 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1213 09:51:25.837514  250842 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 09:51:25.837596  250842 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 09:51:29.603913  250842 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.765982883s
	I1213 09:51:31.496424  250842 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.659060268s
	I1213 09:51:32.338947  250842 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501427556s
	I1213 09:51:32.374487  250842 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 09:51:32.391346  250842 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 09:51:32.407117  250842 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 09:51:32.407309  250842 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-238987 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 09:51:32.419350  250842 kubeadm.go:319] [bootstrap-token] Using token: 495sc3.xr5azsap1uio3wzr
	I1213 09:51:32.422333  250842 out.go:252]   - Configuring RBAC rules ...
	I1213 09:51:32.422456  250842 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 09:51:32.427367  250842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 09:51:32.435915  250842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 09:51:32.442213  250842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 09:51:32.446116  250842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 09:51:32.450368  250842 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 09:51:32.747492  250842 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 09:51:33.190972  250842 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 09:51:33.749115  250842 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 09:51:33.750371  250842 kubeadm.go:319] 
	I1213 09:51:33.750443  250842 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 09:51:33.750448  250842 kubeadm.go:319] 
	I1213 09:51:33.750525  250842 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 09:51:33.750529  250842 kubeadm.go:319] 
	I1213 09:51:33.750554  250842 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 09:51:33.750612  250842 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 09:51:33.750663  250842 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 09:51:33.750666  250842 kubeadm.go:319] 
	I1213 09:51:33.750731  250842 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 09:51:33.750737  250842 kubeadm.go:319] 
	I1213 09:51:33.750784  250842 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 09:51:33.750788  250842 kubeadm.go:319] 
	I1213 09:51:33.750840  250842 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 09:51:33.750915  250842 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 09:51:33.750983  250842 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 09:51:33.750987  250842 kubeadm.go:319] 
	I1213 09:51:33.751071  250842 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 09:51:33.751148  250842 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 09:51:33.751152  250842 kubeadm.go:319] 
	I1213 09:51:33.751236  250842 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 495sc3.xr5azsap1uio3wzr \
	I1213 09:51:33.751340  250842 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ca396702c13b22ce2684a629edf944e26d46670a49ccbab03823eeba29ec5e8d \
	I1213 09:51:33.751360  250842 kubeadm.go:319] 	--control-plane 
	I1213 09:51:33.751364  250842 kubeadm.go:319] 
	I1213 09:51:33.751448  250842 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 09:51:33.751455  250842 kubeadm.go:319] 
	I1213 09:51:33.751561  250842 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 495sc3.xr5azsap1uio3wzr \
	I1213 09:51:33.751665  250842 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ca396702c13b22ce2684a629edf944e26d46670a49ccbab03823eeba29ec5e8d 
	I1213 09:51:33.755660  250842 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 09:51:33.755875  250842 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:51:33.755978  250842 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
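The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA at any time; with this profile's certificatesDir of /var/lib/minikube/certs (rather than the stock /etc/kubernetes/pki) the standard recipe is (sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'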
	I1213 09:51:33.755999  250842 cni.go:84] Creating CNI manager for ""
	I1213 09:51:33.756007  250842 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:51:33.760977  250842 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 09:51:33.763911  250842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 09:51:33.768018  250842 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 09:51:33.768040  250842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 09:51:33.784596  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
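The manifest applied here deploys the kindnet CNI chosen earlier. Once the apiserver is answering, the rollout can be watched directly (a sketch, assuming the DaemonSet name kindnet in kube-system used by minikube's bundled manifest):

    kubectl -n kube-system rollout status daemonset kindnet --timeout=120s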
	I1213 09:51:34.097213  250842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:51:34.097329  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:34.097353  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-238987 minikube.k8s.io/updated_at=2025_12_13T09_51_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=embed-certs-238987 minikube.k8s.io/primary=true
	I1213 09:51:34.410615  250842 ops.go:34] apiserver oom_adj: -16
	I1213 09:51:34.410719  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:34.910827  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:35.411712  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:35.911446  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:36.411543  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:36.911770  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:37.410959  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:37.911001  250842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:51:38.039781  250842 kubeadm.go:1114] duration metric: took 3.942511706s to wait for elevateKubeSystemPrivileges
	I1213 09:51:38.039814  250842 kubeadm.go:403] duration metric: took 21.754150954s to StartCluster
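The repeated "kubectl get sa default" calls above are a poll loop: startup is only treated as complete once the default ServiceAccount exists, at which point the minikube-rbac ClusterRoleBinding created earlier is usable. An equivalent shell loop (sketch):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done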
	I1213 09:51:38.039832  250842 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:38.039898  250842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:51:38.041309  250842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:38.041552  250842 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:51:38.041664  250842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 09:51:38.042014  250842 config.go:182] Loaded profile config "embed-certs-238987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:51:38.042130  250842 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:51:38.042196  250842 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-238987"
	I1213 09:51:38.042219  250842 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-238987"
	I1213 09:51:38.042248  250842 host.go:66] Checking if "embed-certs-238987" exists ...
	I1213 09:51:38.042779  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:38.043379  250842 addons.go:70] Setting default-storageclass=true in profile "embed-certs-238987"
	I1213 09:51:38.043403  250842 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-238987"
	I1213 09:51:38.043737  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:38.045098  250842 out.go:179] * Verifying Kubernetes components...
	I1213 09:51:38.052383  250842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:51:38.092209  250842 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:51:38.100331  250842 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:51:38.100364  250842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:51:38.100433  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:38.105003  250842 addons.go:239] Setting addon default-storageclass=true in "embed-certs-238987"
	I1213 09:51:38.105048  250842 host.go:66] Checking if "embed-certs-238987" exists ...
	I1213 09:51:38.105527  250842 cli_runner.go:164] Run: docker container inspect embed-certs-238987 --format={{.State.Status}}
	I1213 09:51:38.131601  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:38.150946  250842 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:51:38.150966  250842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:51:38.151038  250842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-238987
	I1213 09:51:38.178036  250842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/embed-certs-238987/id_rsa Username:docker}
	I1213 09:51:38.469989  250842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 09:51:38.470138  250842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:51:38.481760  250842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:51:38.496306  250842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:51:39.195345  250842 node_ready.go:35] waiting up to 6m0s for node "embed-certs-238987" to be "Ready" ...
	I1213 09:51:39.195735  250842 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
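The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves from inside the cluster; the injected stanza is effectively (a sketch of the resulting Corefile fragment):

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

It can be confirmed with: kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'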
	I1213 09:51:39.594847  250842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.098506232s)
	I1213 09:51:39.600098  250842 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 09:51:39.602981  250842 addons.go:530] duration metric: took 1.560840608s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 09:51:39.699918  250842 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-238987" context rescaled to 1 replicas
	W1213 09:51:41.198925  250842 node_ready.go:57] node "embed-certs-238987" has "Ready":"False" status (will retry)
	W1213 09:51:43.698270  250842 node_ready.go:57] node "embed-certs-238987" has "Ready":"False" status (will retry)
	I1213 09:51:46.392677  201245 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000234607s
	I1213 09:51:46.392720  201245 kubeadm.go:319] 
	I1213 09:51:46.392776  201245 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:51:46.392812  201245 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:51:46.392915  201245 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:51:46.392926  201245 kubeadm.go:319] 
	I1213 09:51:46.393024  201245 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:51:46.393060  201245 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:51:46.393090  201245 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:51:46.393098  201245 kubeadm.go:319] 
	I1213 09:51:46.397243  201245 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:51:46.397677  201245 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:51:46.397793  201245 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:51:46.398065  201245 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 09:51:46.398075  201245 kubeadm.go:319] 
	I1213 09:51:46.398144  201245 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
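Here the interleaved run from PID 201245 (a different profile, on v1.35.0-beta.0) has hit the 4m0s kubelet health deadline. Following the log's own hints, the first checks would be (a sketch; substitute the failing profile's name):

    minikube ssh -p <profile> -- sudo systemctl status kubelet
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet -n 200
    minikube ssh -p <profile> -- curl -sS http://127.0.0.1:10248/healthz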
	I1213 09:51:46.398201  201245 kubeadm.go:403] duration metric: took 12m19.37171003s to StartCluster
	I1213 09:51:46.398239  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:51:46.398302  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:51:46.424143  201245 cri.go:89] found id: ""
	I1213 09:51:46.424166  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.424180  201245 logs.go:284] No container was found matching "kube-apiserver"
	I1213 09:51:46.424186  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 09:51:46.424254  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:51:46.452320  201245 cri.go:89] found id: ""
	I1213 09:51:46.452342  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.452350  201245 logs.go:284] No container was found matching "etcd"
	I1213 09:51:46.452372  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 09:51:46.452429  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:51:46.478830  201245 cri.go:89] found id: ""
	I1213 09:51:46.478852  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.478860  201245 logs.go:284] No container was found matching "coredns"
	I1213 09:51:46.478866  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:51:46.478935  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:51:46.506032  201245 cri.go:89] found id: ""
	I1213 09:51:46.506054  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.506063  201245 logs.go:284] No container was found matching "kube-scheduler"
	I1213 09:51:46.506069  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:51:46.506128  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:51:46.533342  201245 cri.go:89] found id: ""
	I1213 09:51:46.533365  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.533374  201245 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:51:46.533380  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:51:46.533442  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:51:46.560163  201245 cri.go:89] found id: ""
	I1213 09:51:46.560251  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.560275  201245 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 09:51:46.560299  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 09:51:46.560380  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:51:46.586972  201245 cri.go:89] found id: ""
	I1213 09:51:46.587000  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.587009  201245 logs.go:284] No container was found matching "kindnet"
	I1213 09:51:46.587015  201245 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:51:46.587084  201245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:51:46.612430  201245 cri.go:89] found id: ""
	I1213 09:51:46.612470  201245 logs.go:282] 0 containers: []
	W1213 09:51:46.612479  201245 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:51:46.612489  201245 logs.go:123] Gathering logs for kubelet ...
	I1213 09:51:46.612500  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:51:46.672952  201245 logs.go:123] Gathering logs for dmesg ...
	I1213 09:51:46.672991  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:51:46.686821  201245 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:51:46.686850  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:51:46.754373  201245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:51:46.754403  201245 logs.go:123] Gathering logs for containerd ...
	I1213 09:51:46.754416  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 09:51:46.798554  201245 logs.go:123] Gathering logs for container status ...
	I1213 09:51:46.798588  201245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 09:51:46.827488  201245 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000234607s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 09:51:46.827565  201245 out.go:285] * 
	W1213 09:51:46.827620  201245 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000234607s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:51:46.827639  201245 out.go:285] * 
	W1213 09:51:46.829784  201245 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:51:46.836423  201245 out.go:203] 
	W1213 09:51:46.839750  201245 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000234607s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 09:51:46.839797  201245 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 09:51:46.839820  201245 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 09:51:46.842987  201245 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.204331335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.205297474Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.475456255s"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.206353163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.207243059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.847369398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.849754849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.852053947Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.856727846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.857964978Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 650.582004ms"
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.858128729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 13 09:43:39 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:39.858992343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.830871648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.833501318Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.835594359Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.840485655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.841864425Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.982721697s"
	Dec 13 09:43:41 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:43:41.841987396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.753205842Z" level=info msg="container event discarded" container=5e07d9e3e87176d1603b47c2980dde6d1d4b19b93eab68c347b71ac39eccbe69 type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.767489876Z" level=info msg="container event discarded" container=e32b81192e7e434140fdcd17fc2f42ccbd11ffd2cfc7fac30d9553008556c1e8 type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.778820716Z" level=info msg="container event discarded" container=67a89f46f0d811d5f76c63cb23ba54881fdf0e462a8cd44e829bbc2a89f93376 type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.778870596Z" level=info msg="container event discarded" container=0d6f8c3c8ad390f11c793461707d676f9f884048c521c07a5d4c77dd7ad64854 type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.797081978Z" level=info msg="container event discarded" container=37f8a956e29dcc9add661f89c58aa733ef37717989ff1138cc498ffd003ef6f6 type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.797133104Z" level=info msg="container event discarded" container=275a507849d55dfdb1be2fa7bffc96b87d65376d6714657cf59972d3bce2f88d type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.813331831Z" level=info msg="container event discarded" container=1e689b0cfce988cfd85cfcfc882c6f226b181ca26c1f151683e022967afb950d type=CONTAINER_DELETED_EVENT
	Dec 13 09:48:30 kubernetes-upgrade-355809 containerd[558]: time="2025-12-13T09:48:30.813385633Z" level=info msg="container event discarded" container=5d335a22d0cfdbabb0f8f96bf02cabed954703beda73c72ab426965a408fe3c7 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 09:51:48 up  1:34,  0 user,  load average: 2.78, 2.28, 2.29
	Linux kubernetes-upgrade-355809 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 09:51:45 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:51:46 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 09:51:46 kubernetes-upgrade-355809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:46 kubernetes-upgrade-355809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:46 kubernetes-upgrade-355809 kubelet[14010]: E1213 09:51:46.312679   14010 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:51:46 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:51:46 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:47 kubernetes-upgrade-355809 kubelet[14105]: E1213 09:51:47.098640   14105 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:47 kubernetes-upgrade-355809 kubelet[14117]: E1213 09:51:47.842278   14117 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:51:47 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 09:51:48 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 09:51:48 kubernetes-upgrade-355809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:48 kubernetes-upgrade-355809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:51:48 kubernetes-upgrade-355809 kubelet[14217]: E1213 09:51:48.599041   14217 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 09:51:48 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 09:51:48 kubernetes-upgrade-355809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
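
The kubeadm warnings above spell out the immediate workaround: from kubelet v1.35 on, running on a cgroup v1 host requires the KubeletConfiguration option 'FailCgroupV1' to be set to 'false', and the kubelet journal shows exactly that validation failing on every restart ("kubelet is configured to not run on a host using cgroup v1"). kubeadm is already applying a strategic-merge patch to the "kubeletconfiguration" target (see the [patches] lines) and already skips SystemVerification via --ignore-preflight-errors, so the missing piece is the config override itself. A minimal sketch of what that override looks like as a kubeadm patch file, assuming direct access to kubeadm; the /etc/kubeadm/patches path and file name are illustrative, and on a minikube-managed node the setting would have to flow through minikube's own kubeadm invocation rather than be applied by hand:

	# hypothetical patches dir; kubeadm init --patches <dir> picks up kubeletconfiguration*.yaml
	sudo mkdir -p /etc/kubeadm/patches
	sudo tee /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml >/dev/null <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# let kubelet v1.35+ start on a (deprecated) cgroup v1 host; see the KEP link in the warning
	failCgroupV1: false
	EOF
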
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-355809 -n kubernetes-upgrade-355809
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-355809 -n kubernetes-upgrade-355809: exit status 2 (328.245578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-355809" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-355809" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-355809
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-355809: (2.278366427s)
--- FAIL: TestKubernetesUpgrade (802.91s)
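
The deprecation warning also names the longer-term fix: migrate the host to cgroup v2. This runner is Ubuntu 20.04 on kernel 5.15.0-1084-aws still using the v1 hierarchy, so the kubelet crash loop (restart counter 319-322 above) repeats until the hierarchy changes. A minimal sketch for switching a systemd host with GRUB to the unified cgroup v2 hierarchy, assuming the machine can be rebooted:

	# "cgroup2fs" means v2 is already mounted; "tmpfs" indicates the legacy v1 layout
	stat -fc %T /sys/fs/cgroup/
	# enable the unified (v2) hierarchy on the kernel command line, then reboot and re-check
	sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
	sudo update-grub
	sudo reboot
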

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (509.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 09:52:14.443630    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m27.559552452s)

                                                
                                                
-- stdout --
	* [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:51:51.564795  254588 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:51:51.564928  254588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:51.564934  254588 out.go:374] Setting ErrFile to fd 2...
	I1213 09:51:51.565660  254588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:51:51.566163  254588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:51:51.566794  254588 out.go:368] Setting JSON to false
	I1213 09:51:51.567883  254588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5664,"bootTime":1765613848,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:51:51.567989  254588 start.go:143] virtualization:  
	I1213 09:51:51.571806  254588 out.go:179] * [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:51:51.575064  254588 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:51:51.575205  254588 notify.go:221] Checking for updates...
	I1213 09:51:51.581395  254588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:51:51.584526  254588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:51:51.587419  254588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:51:51.590354  254588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:51:51.593379  254588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:51:51.596908  254588 config.go:182] Loaded profile config "embed-certs-238987": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:51:51.597114  254588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:51:51.620912  254588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:51:51.621047  254588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:51:51.688917  254588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:51:51.679264711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:51:51.689015  254588 docker.go:319] overlay module found
	I1213 09:51:51.692380  254588 out.go:179] * Using the docker driver based on user configuration
	I1213 09:51:51.695312  254588 start.go:309] selected driver: docker
	I1213 09:51:51.695334  254588 start.go:927] validating driver "docker" against <nil>
	I1213 09:51:51.695349  254588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:51:51.696121  254588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:51:51.759490  254588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:51:51.749356762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:51:51.759740  254588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:51:51.759979  254588 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:51:51.763066  254588 out.go:179] * Using Docker driver with root privileges
	I1213 09:51:51.765880  254588 cni.go:84] Creating CNI manager for ""
	I1213 09:51:51.765941  254588 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:51:51.765957  254588 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:51:51.766029  254588 start.go:353] cluster config:
	{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:51:51.769818  254588 out.go:179] * Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	I1213 09:51:51.772683  254588 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:51:51.775606  254588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:51:51.778451  254588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:51:51.778538  254588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:51:51.778599  254588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 09:51:51.778630  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json: {Name:mkfe33ee0a70e3dd6d357b02f4430e67cc35d526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:51:51.778890  254588 cache.go:107] acquiring lock: {Name:mk1139c6b82931eb02e4fc01be1646c4b5fb6137 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.778951  254588 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 09:51:51.778962  254588 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.483µs
	I1213 09:51:51.778974  254588 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 09:51:51.778986  254588 cache.go:107] acquiring lock: {Name:mke9e3c7a7c5dbec5022163863159aa6109df603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.779049  254588 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:51:51.779459  254588 cache.go:107] acquiring lock: {Name:mkc53cc9694a66de0b7b66cb687f9b4074b3c86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.779596  254588 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:51:51.779871  254588 cache.go:107] acquiring lock: {Name:mk349a8caa03fed06b3fb3e0b39b00347dcb9b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.780529  254588 cache.go:107] acquiring lock: {Name:mk07cf085b7776efa96cbbe85a2f7495a2806d09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.781936  254588 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:51:51.782398  254588 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:51:51.782745  254588 cache.go:107] acquiring lock: {Name:mk0e27a2c36e6dbaae7432bc4e472a6212c75814 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.782904  254588 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 09:51:51.782915  254588 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 174.59µs
	I1213 09:51:51.782923  254588 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 09:51:51.782935  254588 cache.go:107] acquiring lock: {Name:mkdbfdeb98feed2961bb0c3f8a6d24ab310632c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.783041  254588 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:51:51.783637  254588 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:51:51.783832  254588 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:51:51.784391  254588 cache.go:107] acquiring lock: {Name:mk3eb587f4f7424524980a5884c47c318ddc6f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.784454  254588 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 09:51:51.784462  254588 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 75.406µs
	I1213 09:51:51.784469  254588 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 09:51:51.784627  254588 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:51:51.784966  254588 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:51:51.785158  254588 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:51:51.803021  254588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:51:51.803045  254588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:51:51.803062  254588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:51:51.803091  254588 start.go:360] acquireMachinesLock for no-preload-328069: {Name:mkb27df066f9039321ce696d5a7013e52143011a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:51:51.803901  254588 start.go:364] duration metric: took 784.212µs to acquireMachinesLock for "no-preload-328069"
	I1213 09:51:51.803942  254588 start.go:93] Provisioning new machine with config: &{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:51:51.804012  254588 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:51:51.809286  254588 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:51:51.809536  254588 start.go:159] libmachine.API.Create for "no-preload-328069" (driver="docker")
	I1213 09:51:51.809576  254588 client.go:173] LocalClient.Create starting
	I1213 09:51:51.809644  254588 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:51:51.809682  254588 main.go:143] libmachine: Decoding PEM data...
	I1213 09:51:51.809701  254588 main.go:143] libmachine: Parsing certificate...
	I1213 09:51:51.809763  254588 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:51:51.809790  254588 main.go:143] libmachine: Decoding PEM data...
	I1213 09:51:51.809808  254588 main.go:143] libmachine: Parsing certificate...
	I1213 09:51:51.810175  254588 cli_runner.go:164] Run: docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:51:51.834028  254588 cli_runner.go:211] docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:51:51.834122  254588 network_create.go:284] running [docker network inspect no-preload-328069] to gather additional debugging logs...
	I1213 09:51:51.834146  254588 cli_runner.go:164] Run: docker network inspect no-preload-328069
	W1213 09:51:51.851256  254588 cli_runner.go:211] docker network inspect no-preload-328069 returned with exit code 1
	I1213 09:51:51.851287  254588 network_create.go:287] error running [docker network inspect no-preload-328069]: docker network inspect no-preload-328069: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-328069 not found
	I1213 09:51:51.851301  254588 network_create.go:289] output of [docker network inspect no-preload-328069]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-328069 not found
	
	** /stderr **
	I1213 09:51:51.851405  254588 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:51:51.868975  254588 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:51:51.869331  254588 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:51:51.869777  254588 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:51:51.870206  254588 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3d510}
	I1213 09:51:51.870261  254588 network_create.go:124] attempt to create docker network no-preload-328069 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 09:51:51.870323  254588 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-328069 no-preload-328069
	I1213 09:51:51.951131  254588 network_create.go:108] docker network no-preload-328069 192.168.76.0/24 created
	I1213 09:51:51.951162  254588 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-328069" container
	I1213 09:51:51.951236  254588 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:51:51.969511  254588 cli_runner.go:164] Run: docker volume create no-preload-328069 --label name.minikube.sigs.k8s.io=no-preload-328069 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:51:51.997619  254588 oci.go:103] Successfully created a docker volume no-preload-328069
	I1213 09:51:51.997718  254588 cli_runner.go:164] Run: docker run --rm --name no-preload-328069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-328069 --entrypoint /usr/bin/test -v no-preload-328069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:51:52.124644  254588 cache.go:162] opening:  /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 09:51:52.129467  254588 cache.go:162] opening:  /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 09:51:52.135465  254588 cache.go:162] opening:  /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 09:51:52.153858  254588 cache.go:162] opening:  /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 09:51:52.198342  254588 cache.go:162] opening:  /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 09:51:52.560764  254588 cache.go:157] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 09:51:52.560844  254588 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 777.907252ms
	I1213 09:51:52.560873  254588 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 09:51:52.678826  254588 oci.go:107] Successfully prepared a docker volume no-preload-328069
	I1213 09:51:52.678865  254588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1213 09:51:52.678996  254588 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:51:52.679118  254588 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:51:52.742415  254588 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-328069 --name no-preload-328069 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-328069 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-328069 --network no-preload-328069 --ip 192.168.76.2 --volume no-preload-328069:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:51:53.099266  254588 cache.go:157] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 09:51:53.099341  254588 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.319886994s
	I1213 09:51:53.099368  254588 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 09:51:53.137313  254588 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Running}}
	I1213 09:51:53.214533  254588 cache.go:157] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 09:51:53.214566  254588 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.434040826s
	I1213 09:51:53.214601  254588 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 09:51:53.221259  254588 cache.go:157] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 09:51:53.221374  254588 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.441506538s
	I1213 09:51:53.221402  254588 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 09:51:53.238431  254588 cache.go:157] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 09:51:53.238721  254588 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.459731713s
	I1213 09:51:53.238757  254588 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 09:51:53.239318  254588 cache.go:87] Successfully saved all images to host disk.
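
The cache.go lines above show minikube's check-then-save tarball pattern: each image is fetched once into the host cache ("opening: ..."), and later runs take the fast path when the tar "exists". A minimal Go sketch of that flow, with hypothetical helper names (not minikube's actual API), assuming the image bytes are already in hand:

	// cache_sketch.go: a minimal sketch of the tarball-cache pattern seen above,
	// using hypothetical names; minikube's real logic lives in its cache package.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// saveToCache writes data to path atomically: a temp file in the same
	// directory is renamed into place, so a crashed download never leaves a
	// half-written tar that a later "exists" check would wrongly trust.
	func saveToCache(path string, data []byte) error {
		if _, err := os.Stat(path); err == nil {
			return nil // fast path: "cache image ... exists"
		}
		if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
			return err
		}
		tmp, err := os.CreateTemp(filepath.Dir(path), ".download-*")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		return os.Rename(tmp.Name(), path) // atomic on the same filesystem
	}

	func main() {
		p := "/tmp/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0"
		fmt.Println(saveToCache(p, []byte("tar bytes here")))
	}
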
	I1213 09:51:53.242843  254588 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 09:51:53.269381  254588 cli_runner.go:164] Run: docker exec no-preload-328069 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:51:53.322293  254588 oci.go:144] the created container "no-preload-328069" has a running status.
	I1213 09:51:53.322346  254588 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa...
	I1213 09:51:53.496901  254588 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:51:53.547054  254588 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 09:51:53.578882  254588 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:51:53.578902  254588 kic_runner.go:114] Args: [docker exec --privileged no-preload-328069 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:51:53.636414  254588 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 09:51:53.656771  254588 machine.go:94] provisionDockerMachine start ...
	I1213 09:51:53.656863  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:53.683228  254588 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:53.683596  254588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1213 09:51:53.683617  254588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:51:53.684781  254588 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 09:51:56.839087  254588 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 09:51:56.839113  254588 ubuntu.go:182] provisioning hostname "no-preload-328069"
	I1213 09:51:56.839182  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:56.859314  254588 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:56.859809  254588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1213 09:51:56.859831  254588 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-328069 && echo "no-preload-328069" | sudo tee /etc/hostname
	I1213 09:51:57.023784  254588 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 09:51:57.023873  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:57.042478  254588 main.go:143] libmachine: Using SSH client type: native
	I1213 09:51:57.042809  254588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1213 09:51:57.042829  254588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328069/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:51:57.195657  254588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
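
The remote script above keeps /etc/hosts consistent with the new hostname: if the name is already present, do nothing; if a 127.0.1.1 line exists, rewrite it; otherwise append one. The same ensure-entry logic as a pure Go function, for illustration only (minikube runs the shell version over SSH):

	// hosts_sketch.go: the ensure-hostname idea from the shell script above,
	// expressed as a pure function; an approximation, not minikube's code.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry returns hosts content that maps 127.0.1.1 to name:
	// content already mentioning name is left untouched, an existing
	// 127.0.1.1 line is rewritten, otherwise a new line is appended.
	func ensureHostsEntry(content, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(content) {
			return content
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(content) {
			return re.ReplaceAllString(content, "127.0.1.1 "+name)
		}
		return strings.TrimRight(content, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "no-preload-328069"))
	}
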
	I1213 09:51:57.195686  254588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:51:57.195721  254588 ubuntu.go:190] setting up certificates
	I1213 09:51:57.195743  254588 provision.go:84] configureAuth start
	I1213 09:51:57.195806  254588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 09:51:57.214004  254588 provision.go:143] copyHostCerts
	I1213 09:51:57.214083  254588 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:51:57.214097  254588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:51:57.214181  254588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:51:57.214280  254588 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:51:57.214292  254588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:51:57.214320  254588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:51:57.214375  254588 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:51:57.214385  254588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:51:57.214409  254588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:51:57.214459  254588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.no-preload-328069 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-328069]
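
The provision.go line above generates a server certificate whose subject alternative names are exactly the san=[...] list. A compact crypto/x509 sketch of that step, self-signed here for brevity (the real flow signs with the ca.pem/ca-key.pem pair shown in the log):

	// servercert_sketch.go: generating a server cert with the SANs listed above.
	// Self-signed for brevity; minikube signs with its CA key instead.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-328069"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-328069"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
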
	I1213 09:51:57.855595  254588 provision.go:177] copyRemoteCerts
	I1213 09:51:57.855660  254588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:51:57.855710  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:57.875020  254588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 09:51:57.979573  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:51:58.012320  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:51:58.032857  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:51:58.051416  254588 provision.go:87] duration metric: took 855.652095ms to configureAuth
	I1213 09:51:58.051448  254588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:51:58.051719  254588 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:51:58.051736  254588 machine.go:97] duration metric: took 4.394941557s to provisionDockerMachine
	I1213 09:51:58.051763  254588 client.go:176] duration metric: took 6.242162041s to LocalClient.Create
	I1213 09:51:58.051795  254588 start.go:167] duration metric: took 6.242260184s to libmachine.API.Create "no-preload-328069"
	I1213 09:51:58.051807  254588 start.go:293] postStartSetup for "no-preload-328069" (driver="docker")
	I1213 09:51:58.051818  254588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:51:58.051904  254588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:51:58.051968  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:58.070156  254588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 09:51:58.187849  254588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:51:58.191705  254588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:51:58.191732  254588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:51:58.191744  254588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:51:58.191802  254588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:51:58.191879  254588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:51:58.191985  254588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:51:58.200854  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:51:58.221170  254588 start.go:296] duration metric: took 169.348903ms for postStartSetup
	I1213 09:51:58.221535  254588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 09:51:58.239690  254588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 09:51:58.239979  254588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:51:58.240030  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:58.257734  254588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 09:51:58.361042  254588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:51:58.366440  254588 start.go:128] duration metric: took 6.562407421s to createHost
	I1213 09:51:58.366466  254588 start.go:83] releasing machines lock for "no-preload-328069", held for 6.562545572s
	I1213 09:51:58.366538  254588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 09:51:58.384401  254588 ssh_runner.go:195] Run: cat /version.json
	I1213 09:51:58.384460  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:58.384707  254588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:51:58.384765  254588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 09:51:58.408418  254588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 09:51:58.410950  254588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 09:51:58.511310  254588 ssh_runner.go:195] Run: systemctl --version
	I1213 09:51:58.605302  254588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:51:58.609928  254588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:51:58.609999  254588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:51:58.640275  254588 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 09:51:58.640296  254588 start.go:496] detecting cgroup driver to use...
	I1213 09:51:58.640328  254588 detect.go:187] detected "cgroupfs" cgroup driver on host os
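
The detect.go line above decides which cgroup driver to configure ("cgroupfs" on this host). One common heuristic for that decision, sketched here for illustration (not necessarily minikube's exact logic), is to look for the unified cgroup v2 hierarchy:

	// cgroup_sketch.go: a common host cgroup probe, similar in spirit to the
	// detect.go step above; illustrative only.
	package main

	import (
		"fmt"
		"os"
	)

	func cgroupHint() string {
		// On a pure cgroup v2 host the unified hierarchy exposes this file.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "cgroup v2 (systemd driver is the usual choice)"
		}
		return "cgroup v1 (cgroupfs driver is the usual choice)"
	}

	func main() {
		fmt.Println("detected:", cgroupHint())
	}
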
	I1213 09:51:58.640375  254588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:51:58.656412  254588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:51:58.669581  254588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:51:58.669668  254588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:51:58.688672  254588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:51:58.708925  254588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:51:58.837770  254588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:51:58.971246  254588 docker.go:234] disabling docker service ...
	I1213 09:51:58.971317  254588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:51:59.005140  254588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:51:59.021512  254588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:51:59.140331  254588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:51:59.268599  254588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:51:59.282776  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:51:59.298873  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:51:59.309429  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:51:59.320727  254588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:51:59.320808  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:51:59.329972  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:51:59.338897  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:51:59.349227  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:51:59.359139  254588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:51:59.367575  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:51:59.376987  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:51:59.385759  254588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:51:59.395773  254588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:51:59.403855  254588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:51:59.411271  254588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:51:59.531012  254588 ssh_runner.go:195] Run: sudo systemctl restart containerd
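
The sed invocations above rewrite containerd's /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir) before the daemon-reload and restart. The SystemdCgroup edit, done with a Go regexp instead of sed, purely for illustration:

	// containerd_toml_sketch.go: the SystemdCgroup rewrite from the sed line
	// above, expressed as a Go regexp; behavior-equivalent illustration only.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		// (?m) makes ^/$ match per line; ${1} preserves the original indentation.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}
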
	I1213 09:51:59.631046  254588 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:51:59.631114  254588 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:51:59.636539  254588 start.go:564] Will wait 60s for crictl version
	I1213 09:51:59.636606  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:51:59.641417  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:51:59.668912  254588 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
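
The two "Will wait 60s" lines above are bounded polls: minikube stats the containerd socket (and then crictl) until they respond or the deadline passes. A minimal sketch of that wait loop, assuming a plain os.Stat probe rather than minikube's internal retry helpers:

	// wait_socket_sketch.go: a bounded poll matching the "Will wait 60s for
	// socket path" step above; illustrative, not minikube's start.go.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
	}
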
	I1213 09:51:59.668981  254588 ssh_runner.go:195] Run: containerd --version
	I1213 09:51:59.690432  254588 ssh_runner.go:195] Run: containerd --version
	I1213 09:51:59.717476  254588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:51:59.720434  254588 cli_runner.go:164] Run: docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:51:59.739414  254588 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 09:51:59.743935  254588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:51:59.755339  254588 kubeadm.go:884] updating cluster {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:51:59.755448  254588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:51:59.755503  254588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:51:59.779725  254588 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 09:51:59.779750  254588 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 09:51:59.779790  254588 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:51:59.779847  254588 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:51:59.779976  254588 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 09:51:59.780007  254588 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:51:59.780053  254588 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:51:59.780096  254588 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:51:59.780120  254588 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:51:59.780177  254588 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:51:59.781450  254588 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:51:59.781700  254588 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 09:51:59.781844  254588 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:51:59.781957  254588 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:51:59.782128  254588 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:51:59.782263  254588 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:51:59.782307  254588 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:51:59.782425  254588 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
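
Each "daemon lookup ... No such image" above only means the image is absent from the host's local Docker daemon, so minikube falls back to its cached tarballs; the ctr runs that follow then ask containerd inside the node whether each image is already present under the expected digest. A sketch of that existence probe via os/exec, mirroring the logged command (requires ctr on PATH and a running containerd; illustrative only):

	// ctr_probe_sketch.go: probing containerd's k8s.io namespace for an image,
	// as the "ctr -n=k8s.io images ls name==..." runs above do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func imagePresent(ref string) (bool, error) {
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
			"name=="+ref).CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("ctr failed: %v: %s", err, out)
		}
		// ctr prints a header line even with no matches, so look for the ref.
		return strings.Contains(string(out), ref), nil
	}

	func main() {
		ok, err := imagePresent("registry.k8s.io/pause:3.10.1")
		fmt.Println(ok, err)
	}
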
	I1213 09:52:00.022638  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1213 09:52:00.022798  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1213 09:52:00.047686  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1213 09:52:00.047834  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:52:00.057995  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1213 09:52:00.058175  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1213 09:52:00.103345  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1213 09:52:00.103483  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:52:00.103809  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1213 09:52:00.104293  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:52:00.145554  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1213 09:52:00.157372  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:52:00.159628  254588 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1213 09:52:00.159713  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:52:00.307749  254588 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 09:52:00.307800  254588 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 09:52:00.307856  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.313347  254588 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 09:52:00.313419  254588 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:52:00.313480  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.391855  254588 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 09:52:00.391904  254588 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 09:52:00.396245  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.418289  254588 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 09:52:00.418335  254588 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:52:00.418387  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.418486  254588 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 09:52:00.418509  254588 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:52:00.418546  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.471094  254588 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 09:52:00.471155  254588 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:52:00.471223  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.471330  254588 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 09:52:00.471369  254588 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:52:00.471401  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:00.471633  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:52:00.471648  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:52:00.471731  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:52:00.471787  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:52:00.471881  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:52:00.608402  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:52:00.608479  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:52:00.608414  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:52:00.608441  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:52:00.608644  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:52:00.608570  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:52:00.608609  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:52:00.724186  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:52:00.724338  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 09:52:00.724423  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 09:52:00.724512  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:52:00.724590  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 09:52:00.731180  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 09:52:00.731268  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 09:52:00.843480  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 09:52:00.843728  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 09:52:00.843787  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 09:52:00.843825  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 09:52:00.843920  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 09:52:00.843996  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 09:52:00.844111  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 09:52:00.844170  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 09:52:00.846615  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 09:52:00.846736  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 09:52:00.887643  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 09:52:00.887802  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1213 09:52:00.887738  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 09:52:00.887985  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 09:52:00.920939  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 09:52:00.920997  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 09:52:00.921051  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 09:52:00.921072  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1213 09:52:00.921121  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 09:52:00.921197  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 09:52:00.921245  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 09:52:00.921261  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 09:52:00.921319  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 09:52:00.921335  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 09:52:00.921353  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 09:52:00.921210  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 09:52:00.996868  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 09:52:00.996903  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1213 09:52:00.996933  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 09:52:00.996942  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
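
Every "existence check ... Process exited with status 1" above is a remote stat probing whether the tar is already on the node; a non-zero exit means missing, so the file is copied over with its byte size logged. The check-then-copy pattern, sketched with the ssh/scp CLIs for brevity (minikube uses its internal ssh_runner instead):

	// remote_exists_sketch.go: the stat-then-scp pattern repeated above;
	// an illustration using the ssh and scp CLIs, not minikube's transport.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureRemote(host, local, remote string) error {
		// stat exits non-zero when the remote file is absent.
		if err := exec.Command("ssh", host, "stat", remote).Run(); err == nil {
			return nil // already there: skip the copy
		}
		out, err := exec.Command("scp", local, host+":"+remote).CombinedOutput()
		if err != nil {
			return fmt.Errorf("scp failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(ensureRemote("docker@127.0.0.1",
			"/tmp/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1"))
	}
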
	W1213 09:52:01.066352  254588 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 09:52:01.066585  254588 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1213 09:52:01.066671  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:52:01.165061  254588 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 09:52:01.165487  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1213 09:52:01.201469  254588 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 09:52:01.201981  254588 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:52:01.202072  254588 ssh_runner.go:195] Run: which crictl
	I1213 09:52:01.528014  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:52:01.528115  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1213 09:52:01.528164  254588 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 09:52:01.528210  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 09:52:02.877417  254588 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.34932585s)
	I1213 09:52:02.877452  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.349211872s)
	I1213 09:52:02.877472  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1213 09:52:02.877490  254588 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 09:52:02.877501  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:52:02.877526  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1213 09:52:04.004255  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.126702945s)
	I1213 09:52:04.004335  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1213 09:52:04.004374  254588 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 09:52:04.004469  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 09:52:04.004947  254588 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.127425955s)
	I1213 09:52:04.005089  254588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:52:04.050660  254588 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 09:52:04.050782  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:52:05.086545  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.082037303s)
	I1213 09:52:05.086574  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 09:52:05.086592  254588 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 09:52:05.086642  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 09:52:05.086670  254588 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.035855799s)
	I1213 09:52:05.086732  254588 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 09:52:05.086801  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 09:52:06.157420  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.07073875s)
	I1213 09:52:06.157488  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 09:52:06.157527  254588 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 09:52:06.157605  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1213 09:52:07.628561  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.470913994s)
	I1213 09:52:07.628586  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 09:52:07.628608  254588 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 09:52:07.628657  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 09:52:08.754640  254588 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.12595641s)
	I1213 09:52:08.754664  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1213 09:52:08.754717  254588 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:52:08.754788  254588 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1213 09:52:09.151242  254588 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 09:52:09.151328  254588 cache_images.go:125] Successfully loaded all cached images
	I1213 09:52:09.151349  254588 cache_images.go:94] duration metric: took 9.371586064s to LoadCachedImages
	I1213 09:52:09.151386  254588 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:52:09.151564  254588 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:52:09.151652  254588 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:52:09.185258  254588 cni.go:84] Creating CNI manager for ""
	I1213 09:52:09.185284  254588 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:52:09.185304  254588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:52:09.185327  254588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328069 NodeName:no-preload-328069 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:52:09.185477  254588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-328069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
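The kubeadm/kubelet/kube-proxy YAML above is rendered from the kubeadm options struct logged at kubeadm.go:190. A tiny text/template sketch of that render step, with a deliberately simplified fragment (this is not minikube's actual template):

	// kubeadm_template_sketch.go: rendering a fragment of the config above from
	// a Go text/template; the template here is a simplified stand-in.
	package main

	import (
		"os"
		"text/template"
	)

	type opts struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}

	const frag = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		t.Execute(os.Stdout, opts{"192.168.76.2", 8443, "no-preload-328069"})
	}
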
	I1213 09:52:09.185561  254588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:52:09.193937  254588 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 09:52:09.194014  254588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:52:09.202772  254588 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 09:52:09.202870  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 09:52:09.204542  254588 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 09:52:09.204721  254588 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
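
The checksum=file:...sha256 query in the download URLs above tells the downloader to fetch the published .sha256 file and verify the binary against it before trusting it. A sketch of the verification step itself, with a placeholder digest (not the real kubelet hash):

	// checksum_sketch.go: verifying a downloaded binary against its published
	// SHA-256, as the checksum=file:...sha256 query above implies.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verify(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		fmt.Println(verify("/tmp/kubelet", "0000placeholder"))
	}
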
	I1213 09:52:09.206822  254588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 09:52:09.206849  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 09:52:09.940138  254588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:52:09.964019  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 09:52:09.973099  254588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 09:52:09.973177  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 09:52:10.150039  254588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 09:52:10.164494  254588 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 09:52:10.164535  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1213 09:52:10.682168  254588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:52:10.690881  254588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:52:10.707413  254588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:52:10.722898  254588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 09:52:10.737607  254588 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:52:10.741367  254588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:52:10.753902  254588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:52:10.876488  254588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:52:10.901093  254588 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069 for IP: 192.168.76.2
	I1213 09:52:10.901168  254588 certs.go:195] generating shared ca certs ...
	I1213 09:52:10.901202  254588 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:10.901418  254588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:52:10.901494  254588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:52:10.901531  254588 certs.go:257] generating profile certs ...
	I1213 09:52:10.901612  254588 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key
	I1213 09:52:10.901651  254588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt with IP's: []
	I1213 09:52:11.176951  254588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt ...
	I1213 09:52:11.176982  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: {Name:mk1fa2b17620af750e9a15016d6a85bb575b9b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:11.177181  254588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key ...
	I1213 09:52:11.177197  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key: {Name:mk2fec1ce62d9948a4644e56fe7cbd9f23447d1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:11.177278  254588 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a
	I1213 09:52:11.177298  254588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt.f5afe91a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 09:52:11.641208  254588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt.f5afe91a ...
	I1213 09:52:11.641236  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt.f5afe91a: {Name:mk47c30e155bec1f4f8cfe359fe54409664abc85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:11.641440  254588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a ...
	I1213 09:52:11.641460  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a: {Name:mk9e762a8f36a668f51b45f99b8bc4ada793c95c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:11.641971  254588 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt.f5afe91a -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt
	I1213 09:52:11.642059  254588 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key
	I1213 09:52:11.642122  254588 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key
	I1213 09:52:11.642144  254588 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt with IP's: []
	I1213 09:52:11.743776  254588 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt ...
	I1213 09:52:11.743806  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt: {Name:mk7ac9931d6578dde2e74754936c14294f9e0558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:52:11.744381  254588 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key ...
	I1213 09:52:11.744402  254588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key: {Name:mk9409b450f58fa177ab1b4d6ed5b2a68b84438e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
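
For reference, the apiserver profile cert generated above is signed for exactly the IPs listed in the crypto.go line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A quick hedged check of what actually landed in the cert's SANs, assuming OpenSSL 1.1.1+ on the build host:

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt
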
	I1213 09:52:11.744603  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:52:11.744653  254588 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:52:11.744667  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:52:11.744696  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:52:11.744727  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:52:11.744760  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:52:11.744810  254588 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:52:11.745378  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:52:11.764560  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:52:11.789608  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:52:11.807090  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:52:11.824602  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:52:11.844599  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:52:11.864997  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:52:11.883848  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:52:11.901874  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:52:11.920694  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:52:11.938169  254588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:52:11.956713  254588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:52:11.969292  254588 ssh_runner.go:195] Run: openssl version
	I1213 09:52:11.975678  254588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:52:11.984539  254588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:52:11.992853  254588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:52:11.996974  254588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:52:11.997058  254588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:52:12.039769  254588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:52:12.048493  254588 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 09:52:12.056348  254588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:52:12.064224  254588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:52:12.071944  254588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:52:12.075778  254588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:52:12.075866  254588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:52:12.117319  254588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:52:12.125063  254588 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:52:12.136081  254588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:52:12.144840  254588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:52:12.153988  254588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:52:12.157874  254588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:52:12.157942  254588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:52:12.199251  254588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:52:12.207283  254588 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
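
The openssl/ln sequence above reproduces what c_rehash or update-ca-certificates would do: compute each PEM's subject hash and expose it as /etc/ssl/certs/<hash>.0 (b5213941.0 above is exactly this hash for minikubeCA.pem). Condensed into one hedged step for a single cert, skipping the intermediate /etc/ssl/certs/minikubeCA.pem link the log creates first:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
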
	I1213 09:52:12.214975  254588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:52:12.218651  254588 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:52:12.218744  254588 kubeadm.go:401] StartCluster: {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:52:12.218836  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:52:12.218908  254588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:52:12.245622  254588 cri.go:89] found id: ""
	I1213 09:52:12.245748  254588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:52:12.253858  254588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:52:12.263886  254588 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:52:12.263964  254588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:52:12.272297  254588 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:52:12.272319  254588 kubeadm.go:158] found existing configuration files:
	
	I1213 09:52:12.272378  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:52:12.279990  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:52:12.280056  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:52:12.287424  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:52:12.295192  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:52:12.295268  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:52:12.302456  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:52:12.310401  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:52:12.310466  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:52:12.318035  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:52:12.326016  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:52:12.326089  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:52:12.333998  254588 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:52:12.374194  254588 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:52:12.374440  254588 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:52:12.448881  254588 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:52:12.448969  254588 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:52:12.449011  254588 kubeadm.go:319] OS: Linux
	I1213 09:52:12.449060  254588 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:52:12.449112  254588 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:52:12.449167  254588 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:52:12.449222  254588 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:52:12.449296  254588 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:52:12.449349  254588 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:52:12.449398  254588 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:52:12.449453  254588 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:52:12.449504  254588 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:52:12.529680  254588 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:52:12.529799  254588 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:52:12.529895  254588 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:52:12.537377  254588 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:52:12.544579  254588 out.go:252]   - Generating certificates and keys ...
	I1213 09:52:12.544695  254588 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:52:12.544788  254588 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:52:12.688589  254588 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 09:52:12.871846  254588 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 09:52:13.127120  254588 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 09:52:13.222030  254588 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 09:52:13.342830  254588 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 09:52:13.343466  254588 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-328069] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 09:52:13.678860  254588 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 09:52:13.679278  254588 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-328069] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 09:52:14.278008  254588 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 09:52:14.334087  254588 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 09:52:14.563413  254588 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 09:52:14.563719  254588 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:52:14.784967  254588 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:52:14.994594  254588 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:52:15.243677  254588 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:52:15.477699  254588 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:52:15.648315  254588 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:52:15.649124  254588 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:52:15.651925  254588 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:52:15.671709  254588 out.go:252]   - Booting up control plane ...
	I1213 09:52:15.671833  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:52:15.671919  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:52:15.671990  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:52:15.690939  254588 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:52:15.691266  254588 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:52:15.704082  254588 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:52:15.704377  254588 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:52:15.704598  254588 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:52:15.854356  254588 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:52:15.854494  254588 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:56:15.855576  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001232553s
	I1213 09:56:15.855613  254588 kubeadm.go:319] 
	I1213 09:56:15.855715  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 09:56:15.855773  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 09:56:15.856111  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 09:56:15.856121  254588 kubeadm.go:319] 
	I1213 09:56:15.856310  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 09:56:15.856609  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 09:56:15.856667  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 09:56:15.856671  254588 kubeadm.go:319] 
	I1213 09:56:15.861536  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:15.862280  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:15.862472  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:56:15.862909  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 09:56:15.862923  254588 kubeadm.go:319] 
	I1213 09:56:15.863040  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 09:56:15.863174  254588 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-328069] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-328069] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001232553s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
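
Both SystemVerification warnings in the stderr above point at cgroups: this runner's 5.15 kernel appears to still be on cgroup v1, and the warning text itself says kubelet v1.35+ only tolerates v1 when the KubeletConfiguration field FailCgroupV1 is explicitly set to false. That, rather than the healthz timeout itself, would explain why the 10248 probe never answers. A one-line hedged check of which hierarchy a node is on (cgroup2fs means v2; tmpfs means the deprecated v1):

    stat -fc %T /sys/fs/cgroup/
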
	
	I1213 09:56:15.863267  254588 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 09:56:16.279386  254588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:56:16.293592  254588 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:16.293662  254588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:16.301392  254588 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:16.301414  254588 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:16.301472  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:16.309020  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:16.309082  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:16.316874  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:16.324243  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:16.324355  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:16.331712  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:16.339855  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:16.339919  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:16.346882  254588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:16.354658  254588 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:16.354723  254588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:56:16.362104  254588 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:16.401704  254588 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 09:56:16.401859  254588 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:56:16.476290  254588 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:56:16.476366  254588 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 09:56:16.476409  254588 kubeadm.go:319] OS: Linux
	I1213 09:56:16.476482  254588 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:56:16.476600  254588 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 09:56:16.476694  254588 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:56:16.476778  254588 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:56:16.476858  254588 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:56:16.476941  254588 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:56:16.477027  254588 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:56:16.477133  254588 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:56:16.477214  254588 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 09:56:16.540626  254588 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:56:16.540852  254588 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:56:16.540998  254588 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:56:16.549894  254588 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 09:56:16.555099  254588 out.go:252]   - Generating certificates and keys ...
	I1213 09:56:16.555226  254588 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 09:56:16.555302  254588 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 09:56:16.555388  254588 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 09:56:16.555460  254588 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 09:56:16.555602  254588 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 09:56:16.555666  254588 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 09:56:16.555744  254588 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 09:56:16.555815  254588 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 09:56:16.555905  254588 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 09:56:16.555993  254588 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 09:56:16.556037  254588 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 09:56:16.556102  254588 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 09:56:16.947888  254588 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 09:56:17.583360  254588 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 09:56:18.126914  254588 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 09:56:18.265900  254588 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 09:56:18.451873  254588 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 09:56:18.452399  254588 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 09:56:18.456279  254588 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 09:56:18.459613  254588 out.go:252]   - Booting up control plane ...
	I1213 09:56:18.459714  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:56:18.459791  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:56:18.460546  254588 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:56:18.481610  254588 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:56:18.481928  254588 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:56:18.490395  254588 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:56:18.490742  254588 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:56:18.490826  254588 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:56:18.650239  254588 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:56:18.650453  254588 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:00:18.646921  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000078394s
	I1213 10:00:18.646949  254588 kubeadm.go:319] 
	I1213 10:00:18.647006  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:18.647040  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:18.647145  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:18.647149  254588 kubeadm.go:319] 
	I1213 10:00:18.647253  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:18.647285  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:18.647316  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:18.647320  254588 kubeadm.go:319] 
	I1213 10:00:18.652540  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:00:18.653297  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:00:18.653496  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.653975  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:18.653988  254588 kubeadm.go:319] 
	I1213 10:00:18.654109  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
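
Per its own error text, the kubelet-check above is equivalent to curl against the local healthz endpoint. Probing it by hand from inside the node separates the two failure shapes seen in this log: the first attempt's context-deadline timeout versus this attempt's connection refused, where the kubelet process is simply absent (curl then exits nonzero and prints 000):

    curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10248/healthz
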
	I1213 10:00:18.654176  254588 kubeadm.go:403] duration metric: took 8m6.435468168s to StartCluster
	I1213 10:00:18.654233  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:00:18.654307  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:00:18.680404  254588 cri.go:89] found id: ""
	I1213 10:00:18.680438  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.680448  254588 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:00:18.680454  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:00:18.680527  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:00:18.704694  254588 cri.go:89] found id: ""
	I1213 10:00:18.704765  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.704788  254588 logs.go:284] No container was found matching "etcd"
	I1213 10:00:18.704803  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:00:18.704886  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:00:18.732906  254588 cri.go:89] found id: ""
	I1213 10:00:18.732932  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.732942  254588 logs.go:284] No container was found matching "coredns"
	I1213 10:00:18.732949  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:00:18.733006  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:00:18.758531  254588 cri.go:89] found id: ""
	I1213 10:00:18.758558  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.758567  254588 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:00:18.758574  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:00:18.758643  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:00:18.787111  254588 cri.go:89] found id: ""
	I1213 10:00:18.787138  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.787147  254588 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:00:18.787153  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:00:18.787211  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:00:18.814001  254588 cri.go:89] found id: ""
	I1213 10:00:18.814025  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.814034  254588 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:00:18.814041  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:00:18.814115  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:00:18.842019  254588 cri.go:89] found id: ""
	I1213 10:00:18.842046  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.842059  254588 logs.go:284] No container was found matching "kindnet"
	I1213 10:00:18.842096  254588 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:00:18.842115  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:00:18.905936  254588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:00:18.905963  254588 logs.go:123] Gathering logs for containerd ...
	I1213 10:00:18.905977  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:00:18.949644  254588 logs.go:123] Gathering logs for container status ...
	I1213 10:00:18.949677  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:00:18.977252  254588 logs.go:123] Gathering logs for kubelet ...
	I1213 10:00:18.977281  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:00:19.035838  254588 logs.go:123] Gathering logs for dmesg ...
	I1213 10:00:19.035876  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:00:19.049572  254588 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:00:19.049623  254588 out.go:285] * 
	W1213 10:00:19.049685  254588 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:00:19.049702  254588 out.go:285] * 
	W1213 10:00:19.051871  254588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:00:19.057079  254588 out.go:203] 
	W1213 10:00:19.061004  254588 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:00:19.061054  254588 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:00:19.061074  254588 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:00:19.064330  254588 out.go:203] 

                                                
                                                
** /stderr **
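The kubeadm output above also documents what the failed wait-control-plane phase actually does: it polls the kubelet's local health endpoint, http://127.0.0.1:10248/healthz, until it answers or the 4m0s budget runs out. A minimal Go sketch of that probe loop (the endpoint and deadline come from the log; the 5-second poll interval is an assumption, kubeadm's real interval is not shown):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Equivalent to `curl -sSL http://127.0.0.1:10248/healthz`, retried
	// for up to the 4m0s deadline quoted in the kubeadm output above.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(5 * time.Second) // assumed poll interval, not from the log
	}
	fmt.Println("kubelet did not become healthy before the deadline")
}

In this run every probe was refused outright (connection refused on 127.0.0.1:10248), which is why kubeadm points at 'systemctl status kubelet' and 'journalctl -xeu kubelet' rather than at the control-plane manifests.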
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:51:52.8299513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c2a9ce40eddef38103a6cf9a5059be6d55a21e5d26f2dcd09256f4d6e4e169b",
	            "SandboxKey": "/var/run/docker/netns/0c2a9ce40edd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:15:2e:f9:55:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "14441a2b315a1f21a464e01d546592920a40d2eff4ecca4a3389aa3acc59dd14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
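The inspect output is where the post-mortem gets its network picture: the container runs on the dedicated no-preload-328069 bridge network at 192.168.76.2, and each container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port. A short Go sketch of reading those bindings; the docker inspect -f template is standard CLI templating, but the portBinding struct is a hand-rolled illustration, not a minikube or Docker type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding mirrors the entries under NetworkSettings.Ports in the
// inspect output above.
type portBinding struct {
	HostIp   string
	HostPort string
}

func main() {
	// Narrow the same data the post-mortem dumps down to the port map.
	out, err := exec.Command("docker", "inspect",
		"-f", "{{json .NetworkSettings.Ports}}", "no-preload-328069").Output()
	if err != nil {
		panic(err)
	}
	ports := map[string][]portBinding{}
	if err := json.Unmarshal(out, &ports); err != nil {
		panic(err)
	}
	// Prints e.g. "8443/tcp -> 127.0.0.1:33076", matching the output above.
	for proto, bindings := range ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
		}
	}
}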
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 6 (351.202192ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:00:19.498052  276094 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
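The status error above is the direct consequence of the failed start: minikube aborts before writing the profile's endpoint into the kubeconfig, so status finds no matching cluster entry. A sketch of the same lookup with client-go's clientcmd package, using the path and profile name from the log (an approximation of the check, not minikube's actual status code):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and profile name are taken from the status error above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22128-2315/kubeconfig")
	if err != nil {
		panic(err)
	}
	cluster, ok := cfg.Clusters["no-preload-328069"]
	if !ok {
		// The condition status.go reports as `does not appear in ... kubeconfig`.
		fmt.Println(`"no-preload-328069" has no cluster entry in the kubeconfig`)
		return
	}
	fmt.Println("endpoint:", cluster.Server)
}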
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-640993                                                                                                                                                                                                                                  │ old-k8s-version-640993       │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:52 UTC │
	│ delete  │ -p kubernetes-upgrade-355809                                                                                                                                                                                                                               │ kubernetes-upgrade-355809    │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p disable-driver-mounts-130854                                                                                                                                                                                                                            │ disable-driver-mounts-130854 │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:56:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:56:39.477521  271045 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:56:39.477696  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.477728  271045 out.go:374] Setting ErrFile to fd 2...
	I1213 09:56:39.477749  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.478026  271045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:56:39.478473  271045 out.go:368] Setting JSON to false
	I1213 09:56:39.479400  271045 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5952,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:56:39.479497  271045 start.go:143] virtualization:  
	I1213 09:56:39.483651  271045 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:56:39.488083  271045 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:56:39.488164  271045 notify.go:221] Checking for updates...
	I1213 09:56:39.494770  271045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:56:39.497855  271045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:56:39.500958  271045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:56:39.504012  271045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:56:39.507152  271045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:56:39.510591  271045 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:39.510687  271045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:56:39.534137  271045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:56:39.534252  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.597640  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.588587407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.597744  271045 docker.go:319] overlay module found
	I1213 09:56:39.602972  271045 out.go:179] * Using the docker driver based on user configuration
	I1213 09:56:39.605905  271045 start.go:309] selected driver: docker
	I1213 09:56:39.605926  271045 start.go:927] validating driver "docker" against <nil>
	I1213 09:56:39.605939  271045 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:56:39.606668  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.659228  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.649874797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.659395  271045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 09:56:39.659424  271045 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 09:56:39.659705  271045 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:56:39.662610  271045 out.go:179] * Using Docker driver with root privileges
	I1213 09:56:39.665424  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:39.665484  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:39.665497  271045 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:56:39.665588  271045 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:39.668716  271045 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 09:56:39.671669  271045 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:56:39.674572  271045 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:56:39.677446  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:39.677492  271045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:56:39.677522  271045 cache.go:65] Caching tarball of preloaded images
	I1213 09:56:39.677617  271045 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:56:39.677632  271045 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:56:39.677739  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:39.677763  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json: {Name:mkb4456221b0cea9f33fc0d473e380a268794011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:39.677865  271045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:56:39.696673  271045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:56:39.696697  271045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:56:39.696712  271045 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:56:39.696745  271045 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:56:39.696846  271045 start.go:364] duration metric: took 80.821µs to acquireMachinesLock for "newest-cni-987495"
	I1213 09:56:39.696875  271045 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:56:39.696947  271045 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:56:39.700273  271045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:56:39.700478  271045 start.go:159] libmachine.API.Create for "newest-cni-987495" (driver="docker")
	I1213 09:56:39.700510  271045 client.go:173] LocalClient.Create starting
	I1213 09:56:39.700595  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:56:39.700636  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700653  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.700719  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:56:39.700738  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700753  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.701087  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:56:39.716190  271045 cli_runner.go:211] docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:56:39.716263  271045 network_create.go:284] running [docker network inspect newest-cni-987495] to gather additional debugging logs...
	I1213 09:56:39.716283  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495
	W1213 09:56:39.730822  271045 cli_runner.go:211] docker network inspect newest-cni-987495 returned with exit code 1
	I1213 09:56:39.730850  271045 network_create.go:287] error running [docker network inspect newest-cni-987495]: docker network inspect newest-cni-987495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-987495 not found
	I1213 09:56:39.730864  271045 network_create.go:289] output of [docker network inspect newest-cni-987495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-987495 not found
	
	** /stderr **
	I1213 09:56:39.730969  271045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:39.748226  271045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:56:39.748572  271045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:56:39.748888  271045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:56:39.749141  271045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 09:56:39.749577  271045 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7880}
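[Editor's note] The four "skipping subnet" probes above step the third octet by 9 (49, 58, 67, 76) before landing on the free 192.168.85.0/24. A minimal shell sketch of that probe, assuming only the docker CLI (an illustration of the scan the log implies, not minikube's actual network.go logic):

	# Step candidate /24s by 9 in the third octet; stop at the first subnet
	# that no existing docker network already owns.
	for octet in 49 58 67 76 85 94; do
	  subnet="192.168.${octet}.0/24"
	  taken=$(docker network ls -q | xargs -r docker network inspect \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep -cx "$subnet")
	  if [ "$taken" -eq 0 ]; then echo "free subnet: $subnet"; break; fi
	done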
	I1213 09:56:39.749602  271045 network_create.go:124] attempt to create docker network newest-cni-987495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:56:39.749657  271045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-987495 newest-cni-987495
	I1213 09:56:39.818534  271045 network_create.go:108] docker network newest-cni-987495 192.168.85.0/24 created
	I1213 09:56:39.818580  271045 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-987495" container
	I1213 09:56:39.818658  271045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:56:39.841206  271045 cli_runner.go:164] Run: docker volume create newest-cni-987495 --label name.minikube.sigs.k8s.io=newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:56:39.859131  271045 oci.go:103] Successfully created a docker volume newest-cni-987495
	I1213 09:56:39.859232  271045 cli_runner.go:164] Run: docker run --rm --name newest-cni-987495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --entrypoint /usr/bin/test -v newest-cni-987495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:56:40.390762  271045 oci.go:107] Successfully prepared a docker volume newest-cni-987495
	I1213 09:56:40.390831  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:40.390845  271045 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:56:40.390916  271045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:56:44.612485  271045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.221527325s)
	I1213 09:56:44.612518  271045 kic.go:203] duration metric: took 4.221669898s to extract preloaded images to volume ...
	W1213 09:56:44.612667  271045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:56:44.612789  271045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:56:44.665912  271045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-987495 --name newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-987495 --network newest-cni-987495 --ip 192.168.85.2 --volume newest-cni-987495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:56:44.956868  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Running}}
	I1213 09:56:44.977125  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:44.997663  271045 cli_runner.go:164] Run: docker exec newest-cni-987495 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:56:45.071335  271045 oci.go:144] the created container "newest-cni-987495" has a running status.
	I1213 09:56:45.071378  271045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa...
	I1213 09:56:45.174388  271045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:56:45.225815  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.265949  271045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:56:45.265980  271045 kic_runner.go:114] Args: [docker exec --privileged newest-cni-987495 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:56:45.330610  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.353288  271045 machine.go:94] provisionDockerMachine start ...
	I1213 09:56:45.353380  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:45.380805  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:45.381141  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:45.381150  271045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:56:45.381824  271045 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 09:56:48.535017  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.535041  271045 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 09:56:48.535116  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.552976  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.553289  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.553308  271045 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 09:56:48.715838  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.716003  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.735300  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.735636  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.735659  271045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:56:48.887956  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:56:48.887983  271045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:56:48.888015  271045 ubuntu.go:190] setting up certificates
	I1213 09:56:48.888025  271045 provision.go:84] configureAuth start
	I1213 09:56:48.888083  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:48.904758  271045 provision.go:143] copyHostCerts
	I1213 09:56:48.904824  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:56:48.904839  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:56:48.904928  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:56:48.905026  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:56:48.905037  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:56:48.905066  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:56:48.905132  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:56:48.905142  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:56:48.905168  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:56:48.905218  271045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
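[Editor's note] The server cert above is signed by the local minikube CA with SANs covering 127.0.0.1, 192.168.85.2, localhost, minikube, and the node name. A rough openssl equivalent, assuming bash and illustrative file names (minikube does this in Go, not by shelling out):

	# Hypothetical reproduction of the server cert generation logged above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.newest-cni-987495" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-987495')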
	I1213 09:56:49.148109  271045 provision.go:177] copyRemoteCerts
	I1213 09:56:49.148175  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:56:49.148216  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.167297  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.275554  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:56:49.293524  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:56:49.311255  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:56:49.328583  271045 provision.go:87] duration metric: took 440.545309ms to configureAuth
	I1213 09:56:49.328607  271045 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:56:49.328807  271045 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:49.328820  271045 machine.go:97] duration metric: took 3.97550235s to provisionDockerMachine
	I1213 09:56:49.328826  271045 client.go:176] duration metric: took 9.628307523s to LocalClient.Create
	I1213 09:56:49.328840  271045 start.go:167] duration metric: took 9.628363097s to libmachine.API.Create "newest-cni-987495"
	I1213 09:56:49.328847  271045 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 09:56:49.328857  271045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:56:49.328908  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:56:49.328944  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.345687  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.452617  271045 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:56:49.456102  271045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:56:49.456132  271045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:56:49.456144  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:56:49.456197  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:56:49.456275  271045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:56:49.456381  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:56:49.464374  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:49.482803  271045 start.go:296] duration metric: took 153.942655ms for postStartSetup
	I1213 09:56:49.483179  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.501288  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:49.501569  271045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:56:49.501608  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.519643  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.620541  271045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:56:49.625365  271045 start.go:128] duration metric: took 9.928403278s to createHost
	I1213 09:56:49.625389  271045 start.go:83] releasing machines lock for "newest-cni-987495", held for 9.928529598s
	I1213 09:56:49.625471  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.645994  271045 ssh_runner.go:195] Run: cat /version.json
	I1213 09:56:49.646048  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.646301  271045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:56:49.646369  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.671756  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.687696  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.781881  271045 ssh_runner.go:195] Run: systemctl --version
	I1213 09:56:49.881841  271045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:56:49.886330  271045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:56:49.886436  271045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:56:49.913764  271045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 09:56:49.913793  271045 start.go:496] detecting cgroup driver to use...
	I1213 09:56:49.913826  271045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:56:49.913873  271045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:56:49.928737  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:56:49.941512  271045 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:56:49.941581  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:56:49.958476  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:56:49.976657  271045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:56:50.092571  271045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:56:50.215484  271045 docker.go:234] disabling docker service ...
	I1213 09:56:50.215599  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:56:50.236595  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:56:50.249894  271045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:56:50.372863  271045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:56:50.492030  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:56:50.505104  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:56:50.520463  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:56:50.530400  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:56:50.539863  271045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:56:50.539979  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:56:50.549222  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.558350  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:56:50.567652  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.576927  271045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:56:50.585862  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:56:50.595196  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:56:50.604766  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:56:50.613925  271045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:56:50.621385  271045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:56:50.629064  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:50.735877  271045 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 09:56:50.857747  271045 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:56:50.857827  271045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:56:50.861671  271045 start.go:564] Will wait 60s for crictl version
	I1213 09:56:50.861742  271045 ssh_runner.go:195] Run: which crictl
	I1213 09:56:50.865238  271045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:56:50.887066  271045 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:56:50.887150  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.905856  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.933984  271045 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:56:50.936956  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:50.952566  271045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:56:50.956629  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
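[Editor's note] The brace-group command above (used again later for control-plane.minikube.internal) rewrites a single hosts entry: strip any prior line for the name, append the new mapping, then cp (not mv) the temp file back, since Docker bind-mounts /etc/hosts into the container and a rename across the mount would fail. Unpacked, with comments:

	# Drop any existing "<tab>host.minikube.internal" entry, append the new
	# mapping, and copy the result back over the bind-mounted /etc/hosts.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts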
	I1213 09:56:50.969533  271045 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:56:50.972467  271045 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:56:50.972618  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:50.972704  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:50.996202  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:50.996226  271045 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:56:50.996284  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:51.022962  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:51.022986  271045 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:56:51.022994  271045 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:56:51.023092  271045 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:56:51.023168  271045 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:56:51.048658  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:51.048683  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:51.048705  271045 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:56:51.048728  271045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:56:51.048850  271045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:56:51.048925  271045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:56:51.056725  271045 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:56:51.056795  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:56:51.064442  271045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:56:51.077624  271045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:56:51.090906  271045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 09:56:51.103635  271045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:56:51.107116  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:51.116647  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:51.221976  271045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:56:51.239889  271045 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 09:56:51.239918  271045 certs.go:195] generating shared ca certs ...
	I1213 09:56:51.239935  271045 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.240136  271045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:56:51.240196  271045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:56:51.240208  271045 certs.go:257] generating profile certs ...
	I1213 09:56:51.240266  271045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 09:56:51.240284  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt with IP's: []
	I1213 09:56:51.511583  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt ...
	I1213 09:56:51.511617  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt: {Name:mk5464ab31f64983cb0e8dc71ff54579969d5d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511818  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key ...
	I1213 09:56:51.511831  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key: {Name:mke550d3f89d3ec2570e79fb5b504a6e90138b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511927  271045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 09:56:51.511944  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:56:51.643285  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e ...
	I1213 09:56:51.643317  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e: {Name:mk6d3f18d3edc92465fdf76beebc6a34d454297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644306  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e ...
	I1213 09:56:51.644326  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e: {Name:mk3fa19df9059a7cd289477f6e36bd1b8a8de61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644427  271045 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt
	I1213 09:56:51.644510  271045 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key
	I1213 09:56:51.644572  271045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 09:56:51.644592  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt with IP's: []
	I1213 09:56:51.762782  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt ...
	I1213 09:56:51.762818  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt: {Name:mkc4655600dc8f487ec74e9635d5a6c0aaea04b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.763666  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key ...
	I1213 09:56:51.763686  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key: {Name:mkfc1bfb8023d67db678ef417275fa70be4e1a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.764520  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:56:51.764583  271045 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:56:51.764597  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:56:51.764630  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:56:51.764665  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:56:51.764701  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:56:51.764754  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:51.765415  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:56:51.785100  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:56:51.803674  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:56:51.820883  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:56:51.838678  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:56:51.855995  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:56:51.873808  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:56:51.891156  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:56:51.908530  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:56:51.925774  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:56:51.943306  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:56:51.959997  271045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:56:51.972782  271045 ssh_runner.go:195] Run: openssl version
	I1213 09:56:51.978921  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.986461  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:56:51.993616  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997401  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997462  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:56:52.049678  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:56:52.070937  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:56:52.084538  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.094354  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:56:52.106583  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110602  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110668  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.153509  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.160769  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.168129  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.175195  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:56:52.182476  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186073  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186133  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.226828  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:56:52.234290  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
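[Editor's note] The test/ln pairs above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: each trusted cert gets a symlink named after its subject hash plus a ".0" suffix so the library can find it by hash at verify time. One cert's worth, as a sketch:

	# Link a cert under its OpenSSL subject-hash name (the c_rehash scheme).
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"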
	I1213 09:56:52.241627  271045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:56:52.245109  271045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:56:52.245166  271045 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:52.245251  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:56:52.245315  271045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:56:52.270271  271045 cri.go:89] found id: ""
	I1213 09:56:52.270344  271045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:56:52.278009  271045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:56:52.285767  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:52.285833  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:52.293380  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:52.293416  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:52.293469  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:52.301136  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:52.301228  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:52.308290  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:52.315693  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:52.315758  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:52.323086  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.330869  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:52.330965  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.338261  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:52.345809  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:52.345871  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:56:52.353258  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:52.470124  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:52.470684  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:52.537914  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.646921  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000078394s
	I1213 10:00:18.646949  254588 kubeadm.go:319] 
	I1213 10:00:18.647006  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:18.647040  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:18.647145  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:18.647149  254588 kubeadm.go:319] 
	I1213 10:00:18.647253  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:18.647285  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:18.647316  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:18.647320  254588 kubeadm.go:319] 
	I1213 10:00:18.652540  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:00:18.653297  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:00:18.653496  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.653975  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:18.653988  254588 kubeadm.go:319] 
	I1213 10:00:18.654109  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:00:18.654176  254588 kubeadm.go:403] duration metric: took 8m6.435468168s to StartCluster
	I1213 10:00:18.654233  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:00:18.654307  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:00:18.680404  254588 cri.go:89] found id: ""
	I1213 10:00:18.680438  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.680448  254588 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:00:18.680454  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:00:18.680527  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:00:18.704694  254588 cri.go:89] found id: ""
	I1213 10:00:18.704765  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.704788  254588 logs.go:284] No container was found matching "etcd"
	I1213 10:00:18.704803  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:00:18.704886  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:00:18.732906  254588 cri.go:89] found id: ""
	I1213 10:00:18.732932  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.732942  254588 logs.go:284] No container was found matching "coredns"
	I1213 10:00:18.732949  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:00:18.733006  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:00:18.758531  254588 cri.go:89] found id: ""
	I1213 10:00:18.758558  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.758567  254588 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:00:18.758574  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:00:18.758643  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:00:18.787111  254588 cri.go:89] found id: ""
	I1213 10:00:18.787138  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.787147  254588 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:00:18.787153  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:00:18.787211  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:00:18.814001  254588 cri.go:89] found id: ""
	I1213 10:00:18.814025  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.814034  254588 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:00:18.814041  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:00:18.814115  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:00:18.842019  254588 cri.go:89] found id: ""
	I1213 10:00:18.842046  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.842059  254588 logs.go:284] No container was found matching "kindnet"
	I1213 10:00:18.842096  254588 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:00:18.842115  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:00:18.905936  254588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:00:18.905963  254588 logs.go:123] Gathering logs for containerd ...
	I1213 10:00:18.905977  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:00:18.949644  254588 logs.go:123] Gathering logs for container status ...
	I1213 10:00:18.949677  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:00:18.977252  254588 logs.go:123] Gathering logs for kubelet ...
	I1213 10:00:18.977281  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:00:19.035838  254588 logs.go:123] Gathering logs for dmesg ...
	I1213 10:00:19.035876  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:00:19.049572  254588 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:00:19.049623  254588 out.go:285] * 
	W1213 10:00:19.049685  254588 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:00:19.049702  254588 out.go:285] * 
	W1213 10:00:19.051871  254588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:00:19.057079  254588 out.go:203] 
	W1213 10:00:19.061004  254588 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:00:19.061054  254588 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:00:19.061074  254588 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:00:19.064330  254588 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:52:02 no-preload-328069 containerd[755]: time="2025-12-13T09:52:02.879804179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.990607336Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.992952819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.009273066Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.010406673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.074822736Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.077081596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.085416708Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.087033692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.147342869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.149762354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157038592Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157794989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.618593571Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.620865199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.629284660Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.630354201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.744735165Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.746972085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.756996214Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.757622616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.140072906Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.142312452Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.150787462Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.151785092Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:20.163905    5539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:20.164548    5539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:20.166121    5539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:20.166580    5539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:20.168125    5539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:00:20 up  1:42,  0 user,  load average: 0.86, 1.21, 1.80
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:00:17 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:17 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 13 10:00:17 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:17 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:17 no-preload-328069 kubelet[5342]: E1213 10:00:17.836247    5342 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:17 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:17 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:18 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:00:18 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:18 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:18 no-preload-328069 kubelet[5348]: E1213 10:00:18.578998    5348 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:18 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:18 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:19 no-preload-328069 kubelet[5434]: E1213 10:00:19.381468    5434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 kubelet[5530]: E1213 10:00:20.106865    5530 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 6 (316.635957ms)
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1213 10:00:20.586265  276324 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (509.09s)
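The kubelet journal excerpt above shows why the apiserver never came up in this group: kubelet v1.35.0-beta.0 exits immediately on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter 318-321), and kubeadm's wait-control-plane phase times out after 4m0s. A minimal triage sketch on the affected node follows; the failCgroupV1 field name is assumed from the [WARNING SystemVerification] text and has not been verified against this kubelet build:

    # Check which cgroup hierarchy the host runs: "cgroup2fs" means v2, "tmpfs" means v1.
    stat -fc %T /sys/fs/cgroup/

    # Per the kubeadm warning, a cgroup v1 host must opt in explicitly by setting
    # the kubelet configuration option FailCgroupV1 to false, e.g. (field name assumed):
    cat <<'EOF' > kubelet-cgroupv1-optin.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF

    # The failure output itself suggests retrying with the systemd cgroup driver:
    minikube start -p no-preload-328069 --extra-config=kubelet.cgroup-driver=systemd
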
TestStartStop/group/newest-cni/serial/FirstStart (502.16s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 09:56:40.005413    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:57:14.443590    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:57:19.416056    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:58:51.887930    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:59:35.553489    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:03.257560    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.200858    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.207305    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.218762    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.240282    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.281715    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.363275    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.524928    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:11.846603    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:12.488971    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:13.770669    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:16.332082    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:17.522906    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.180453439s)
-- stdout --
	* [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
-- /stdout --
** stderr ** 
	I1213 09:56:39.477521  271045 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:56:39.477696  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.477728  271045 out.go:374] Setting ErrFile to fd 2...
	I1213 09:56:39.477749  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.478026  271045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:56:39.478473  271045 out.go:368] Setting JSON to false
	I1213 09:56:39.479400  271045 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5952,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:56:39.479497  271045 start.go:143] virtualization:  
	I1213 09:56:39.483651  271045 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:56:39.488083  271045 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:56:39.488164  271045 notify.go:221] Checking for updates...
	I1213 09:56:39.494770  271045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:56:39.497855  271045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:56:39.500958  271045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:56:39.504012  271045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:56:39.507152  271045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:56:39.510591  271045 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:39.510687  271045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:56:39.534137  271045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:56:39.534252  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.597640  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.588587407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
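The `docker system info --format "{{json .}}"` dump above (and its near-identical twin a few lines below, taken again during driver validation) is how minikube snapshots host capabilities such as CPU count, memory, and the cgroup driver before committing to the docker driver. A minimal sketch of the same probe, assuming only a docker CLI on PATH; the struct decodes just a handful of the fields visible in the log, not minikube's full info type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds the few fields this sketch cares about; the real
// `docker system info` JSON has many more.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%dMiB cgroup=%s server=%s\n",
		info.NCPU, info.MemTotal/1024/1024, info.CgroupDriver, info.ServerVersion)
}

The CgroupDriver:cgroupfs value reported here is what later drives the SystemdCgroup = false rewrite in /etc/containerd/config.toml.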
	I1213 09:56:39.597744  271045 docker.go:319] overlay module found
	I1213 09:56:39.602972  271045 out.go:179] * Using the docker driver based on user configuration
	I1213 09:56:39.605905  271045 start.go:309] selected driver: docker
	I1213 09:56:39.605926  271045 start.go:927] validating driver "docker" against <nil>
	I1213 09:56:39.605939  271045 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:56:39.606668  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.659228  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.649874797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.659395  271045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 09:56:39.659424  271045 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 09:56:39.659705  271045 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:56:39.662610  271045 out.go:179] * Using Docker driver with root privileges
	I1213 09:56:39.665424  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:39.665484  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:39.665497  271045 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:56:39.665588  271045 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:39.668716  271045 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 09:56:39.671669  271045 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:56:39.674572  271045 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:56:39.677446  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:39.677492  271045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:56:39.677522  271045 cache.go:65] Caching tarball of preloaded images
	I1213 09:56:39.677617  271045 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:56:39.677632  271045 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:56:39.677739  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:39.677763  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json: {Name:mkb4456221b0cea9f33fc0d473e380a268794011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
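The profile.go and lock.go lines above show the cluster config being persisted to profiles/newest-cni-987495/config.json under a named file lock. A sketch of the write half under stated assumptions: profileConfig is a hypothetical, trimmed stand-in for the full config dump above, and the temp-file-plus-rename pattern stands in for whatever locking minikube layers on top:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// profileConfig is a hypothetical, trimmed stand-in for the much larger
// cluster config dumped in the log above.
type profileConfig struct {
	Name              string
	Driver            string
	KubernetesVersion string
	Memory            int
	CPUs              int
}

// saveConfig writes config.json atomically: marshal to a temp file in the
// same directory, then rename over the destination so readers never see a
// half-written file.
func saveConfig(dir string, c profileConfig) error {
	data, err := json.MarshalIndent(c, "", "  ")
	if err != nil {
		return err
	}
	tmp, err := os.CreateTemp(dir, "config-*.json")
	if err != nil {
		return err
	}
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, "config.json"))
}

func main() {
	if err := saveConfig(".", profileConfig{
		Name: "newest-cni-987495", Driver: "docker",
		KubernetesVersion: "v1.35.0-beta.0", Memory: 3072, CPUs: 2,
	}); err != nil {
		panic(err)
	}
}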
	I1213 09:56:39.677865  271045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:56:39.696673  271045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:56:39.696697  271045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:56:39.696712  271045 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:56:39.696745  271045 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:56:39.696846  271045 start.go:364] duration metric: took 80.821µs to acquireMachinesLock for "newest-cni-987495"
	I1213 09:56:39.696875  271045 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:56:39.696947  271045 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:56:39.700273  271045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:56:39.700478  271045 start.go:159] libmachine.API.Create for "newest-cni-987495" (driver="docker")
	I1213 09:56:39.700510  271045 client.go:173] LocalClient.Create starting
	I1213 09:56:39.700595  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:56:39.700636  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700653  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.700719  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:56:39.700738  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700753  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.701087  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:56:39.716190  271045 cli_runner.go:211] docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:56:39.716263  271045 network_create.go:284] running [docker network inspect newest-cni-987495] to gather additional debugging logs...
	I1213 09:56:39.716283  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495
	W1213 09:56:39.730822  271045 cli_runner.go:211] docker network inspect newest-cni-987495 returned with exit code 1
	I1213 09:56:39.730850  271045 network_create.go:287] error running [docker network inspect newest-cni-987495]: docker network inspect newest-cni-987495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-987495 not found
	I1213 09:56:39.730864  271045 network_create.go:289] output of [docker network inspect newest-cni-987495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-987495 not found
	
	** /stderr **
	I1213 09:56:39.730969  271045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:39.748226  271045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:56:39.748572  271045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:56:39.748888  271045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:56:39.749141  271045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 09:56:39.749577  271045 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7880}
	I1213 09:56:39.749602  271045 network_create.go:124] attempt to create docker network newest-cni-987495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:56:39.749657  271045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-987495 newest-cni-987495
	I1213 09:56:39.818534  271045 network_create.go:108] docker network newest-cni-987495 192.168.85.0/24 created
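The four "skipping subnet ... that is taken" probes above walk candidate /24s starting at 192.168.49.0 and, as the sequence 49, 58, 67, 76, 85 suggests, step the third octet by 9 until they reach a subnet not already backing a bridge. A sketch of that selection; the taken set is hard-coded from the log, whereas minikube derives it from the host's interfaces and existing docker networks:

package main

import "fmt"

// firstFreeSubnet mimics the probe visible in the log: starting at
// 192.168.49.0/24 and stepping the third octet by 9, return the first
// candidate that is not already taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as in the log
}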
	I1213 09:56:39.818580  271045 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-987495" container
	I1213 09:56:39.818658  271045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:56:39.841206  271045 cli_runner.go:164] Run: docker volume create newest-cni-987495 --label name.minikube.sigs.k8s.io=newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:56:39.859131  271045 oci.go:103] Successfully created a docker volume newest-cni-987495
	I1213 09:56:39.859232  271045 cli_runner.go:164] Run: docker run --rm --name newest-cni-987495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --entrypoint /usr/bin/test -v newest-cni-987495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:56:40.390762  271045 oci.go:107] Successfully prepared a docker volume newest-cni-987495
	I1213 09:56:40.390831  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:40.390845  271045 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:56:40.390916  271045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:56:44.612485  271045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.221527325s)
	I1213 09:56:44.612518  271045 kic.go:203] duration metric: took 4.221669898s to extract preloaded images to volume ...
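The two `docker run --rm` invocations above are the volume-hydration trick: the first validates the fresh volume with `--entrypoint /usr/bin/test ... -d /var/lib`, and the second untars the preloaded image tarball into it with tar running inside a throwaway kicbase container, so the host needs neither tar nor lz4 installed. A sketch of the extraction step; the volume name, tarball path, and image below are illustrative stand-ins:

package main

import (
	"fmt"
	"os/exec"
)

// hydrateVolume untars a preloaded image tarball into a named docker
// volume by running tar inside a throwaway container, mirroring the
// docker run command in the log.
func hydrateVolume(volume, tarball, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical values; the log shows the real ones.
	if err := hydrateVolume("newest-cni-987495",
		"/tmp/preloaded-images.tar.lz4",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"); err != nil {
		panic(err)
	}
}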
	W1213 09:56:44.612667  271045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:56:44.612789  271045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:56:44.665912  271045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-987495 --name newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-987495 --network newest-cni-987495 --ip 192.168.85.2 --volume newest-cni-987495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:56:44.956868  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Running}}
	I1213 09:56:44.977125  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:44.997663  271045 cli_runner.go:164] Run: docker exec newest-cni-987495 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:56:45.071335  271045 oci.go:144] the created container "newest-cni-987495" has a running status.
	I1213 09:56:45.071378  271045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa...
	I1213 09:56:45.174388  271045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:56:45.225815  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.265949  271045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:56:45.265980  271045 kic_runner.go:114] Args: [docker exec --privileged newest-cni-987495 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:56:45.330610  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.353288  271045 machine.go:94] provisionDockerMachine start ...
	I1213 09:56:45.353380  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:45.380805  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:45.381141  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:45.381150  271045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:56:45.381824  271045 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 09:56:48.535017  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
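The "Error dialing TCP: ssh: handshake failed: EOF" line followed about three seconds later by a clean `hostname` result is the normal pattern for a container whose sshd is still coming up: the provisioner simply retries until a deadline. A sketch of the retry loop at the TCP layer only; real code retries the full SSH handshake, which a bare net.Dial cannot observe:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps dialing until the deadline, which is how a
// just-started container's sshd is normally absorbed: early attempts
// fail and a later one succeeds.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	var lastErr error
	for end := time.Now().Add(deadline); time.Now().Before(end); {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(500 * time.Millisecond)
	}
	return nil, fmt.Errorf("gave up dialing %s: %w", addr, lastErr)
}

func main() {
	// 33093 is the ephemeral host port mapped to the container's 22/tcp,
	// resolved via the docker container inspect template in the log.
	conn, err := dialWithRetry("127.0.0.1:33093", 30*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}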
	
	I1213 09:56:48.535041  271045 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 09:56:48.535116  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.552976  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.553289  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.553308  271045 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 09:56:48.715838  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.716003  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.735300  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.735636  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.735659  271045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:56:48.887956  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:56:48.887983  271045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:56:48.888015  271045 ubuntu.go:190] setting up certificates
	I1213 09:56:48.888025  271045 provision.go:84] configureAuth start
	I1213 09:56:48.888083  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:48.904758  271045 provision.go:143] copyHostCerts
	I1213 09:56:48.904824  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:56:48.904839  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:56:48.904928  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:56:48.905026  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:56:48.905037  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:56:48.905066  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:56:48.905132  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:56:48.905142  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:56:48.905168  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:56:48.905218  271045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
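configureAuth above generates a server certificate whose SANs cover every address the machine may be reached by: 127.0.0.1, 192.168.85.2, localhost, minikube, and the hostname. A minimal sketch with crypto/x509, self-signed for brevity where minikube signs with its CA key, reusing the SAN list from the log line and the 26280h expiry from the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-987495"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the san=[...] log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-987495"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}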
	I1213 09:56:49.148109  271045 provision.go:177] copyRemoteCerts
	I1213 09:56:49.148175  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:56:49.148216  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.167297  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.275554  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:56:49.293524  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:56:49.311255  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:56:49.328583  271045 provision.go:87] duration metric: took 440.545309ms to configureAuth
	I1213 09:56:49.328607  271045 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:56:49.328807  271045 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:49.328820  271045 machine.go:97] duration metric: took 3.97550235s to provisionDockerMachine
	I1213 09:56:49.328826  271045 client.go:176] duration metric: took 9.628307523s to LocalClient.Create
	I1213 09:56:49.328840  271045 start.go:167] duration metric: took 9.628363097s to libmachine.API.Create "newest-cni-987495"
	I1213 09:56:49.328847  271045 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 09:56:49.328857  271045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:56:49.328908  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:56:49.328944  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.345687  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.452617  271045 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:56:49.456102  271045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:56:49.456132  271045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:56:49.456144  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:56:49.456197  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:56:49.456275  271045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:56:49.456381  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:56:49.464374  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:49.482803  271045 start.go:296] duration metric: took 153.942655ms for postStartSetup
	I1213 09:56:49.483179  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.501288  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:49.501569  271045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:56:49.501608  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.519643  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.620541  271045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:56:49.625365  271045 start.go:128] duration metric: took 9.928403278s to createHost
	I1213 09:56:49.625389  271045 start.go:83] releasing machines lock for "newest-cni-987495", held for 9.928529598s
	I1213 09:56:49.625471  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.645994  271045 ssh_runner.go:195] Run: cat /version.json
	I1213 09:56:49.646048  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.646301  271045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:56:49.646369  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.671756  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.687696  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.781881  271045 ssh_runner.go:195] Run: systemctl --version
	I1213 09:56:49.881841  271045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:56:49.886330  271045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:56:49.886436  271045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:56:49.913764  271045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
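The find/mv sweep above renames any bridge or podman CNI config to *.mk_disabled so that only the CNI minikube installs (kindnet, per the earlier recommendation) stays active; renaming rather than deleting keeps the change reversible. A sketch of the same sweep:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs so the runtime
// ignores them, mirroring the find/mv in the log.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}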
	I1213 09:56:49.913793  271045 start.go:496] detecting cgroup driver to use...
	I1213 09:56:49.913826  271045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:56:49.913873  271045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:56:49.928737  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:56:49.941512  271045 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:56:49.941581  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:56:49.958476  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:56:49.976657  271045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:56:50.092571  271045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:56:50.215484  271045 docker.go:234] disabling docker service ...
	I1213 09:56:50.215599  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:56:50.236595  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:56:50.249894  271045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:56:50.372863  271045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:56:50.492030  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:56:50.505104  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:56:50.520463  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:56:50.530400  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:56:50.539863  271045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:56:50.539979  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:56:50.549222  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.558350  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:56:50.567652  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.576927  271045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:56:50.585862  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:56:50.595196  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:56:50.604766  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:56:50.613925  271045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:56:50.621385  271045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:56:50.629064  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:50.735877  271045 ssh_runner.go:195] Run: sudo systemctl restart containerd
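The run of sed edits above rewrites /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the v1 runtime shims are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d, after which containerd is restarted. A sketch of one such idempotent key rewrite, the Go analogue of `sed -r 's|^( *)key = .*$|\1key = value|'`:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites every `key = ...` line to `key = value`, preserving
// indentation. Running it twice is a no-op, which is what makes the sed
// pipeline in the log safe to re-run.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setTOMLKey("/etc/containerd/config.toml",
		"SystemdCgroup", "false"); err != nil {
		panic(err)
	}
	fmt.Println("rewrote SystemdCgroup; restart containerd to apply")
}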
	I1213 09:56:50.857747  271045 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:56:50.857827  271045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:56:50.861671  271045 start.go:564] Will wait 60s for crictl version
	I1213 09:56:50.861742  271045 ssh_runner.go:195] Run: which crictl
	I1213 09:56:50.865238  271045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:56:50.887066  271045 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
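Both "Will wait 60s ..." steps above are the same primitive: poll a check (first a stat of the containerd socket, then a crictl version call) until it passes or a deadline expires. A generic sketch of that loop:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitFor polls check() every interval until it succeeds or the deadline
// passes, the shape behind both "Will wait 60s for socket path" and
// "Will wait 60s for crictl version" in the log.
func waitFor(deadline, interval time.Duration, check func() error) error {
	var lastErr error
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(interval) {
		if lastErr = check(); lastErr == nil {
			return nil
		}
	}
	return fmt.Errorf("timed out: %w", lastErr)
}

func main() {
	err := waitFor(60*time.Second, 500*time.Millisecond, func() error {
		_, err := os.Stat("/run/containerd/containerd.sock")
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}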
	I1213 09:56:50.887150  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.905856  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.933984  271045 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:56:50.936956  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:50.952566  271045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:56:50.956629  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:50.969533  271045 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:56:50.972467  271045 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:56:50.972618  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:50.972704  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:50.996202  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:50.996226  271045 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:56:50.996284  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:51.022962  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:51.022986  271045 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:56:51.022994  271045 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:56:51.023092  271045 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:56:51.023168  271045 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:56:51.048658  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:51.048683  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:51.048705  271045 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:56:51.048728  271045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:56:51.048850  271045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
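The generated kubeadm config above stacks four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. The conntrack timeouts of 0s, per the inline comments, tell kube-proxy not to touch the host's nf_conntrack sysctls, which matters because the docker driver shares the host kernel. A sketch that walks such a multi-document stream with gopkg.in/yaml.v3 and prints each document's kind; the path matches the scp destination a few lines below:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// yaml.Decoder consumes a multi-document stream one document at a time;
// here we only pull out each document's apiVersion and kind.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}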
	
	I1213 09:56:51.048925  271045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:56:51.056725  271045 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:56:51.056795  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:56:51.064442  271045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:56:51.077624  271045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:56:51.090906  271045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 09:56:51.103635  271045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:56:51.107116  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:51.116647  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:51.221976  271045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:56:51.239889  271045 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 09:56:51.239918  271045 certs.go:195] generating shared ca certs ...
	I1213 09:56:51.239935  271045 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.240136  271045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:56:51.240196  271045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:56:51.240208  271045 certs.go:257] generating profile certs ...
	I1213 09:56:51.240266  271045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 09:56:51.240284  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt with IP's: []
	I1213 09:56:51.511583  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt ...
	I1213 09:56:51.511617  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt: {Name:mk5464ab31f64983cb0e8dc71ff54579969d5d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511818  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key ...
	I1213 09:56:51.511831  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key: {Name:mke550d3f89d3ec2570e79fb5b504a6e90138b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511927  271045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 09:56:51.511944  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:56:51.643285  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e ...
	I1213 09:56:51.643317  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e: {Name:mk6d3f18d3edc92465fdf76beebc6a34d454297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644306  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e ...
	I1213 09:56:51.644326  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e: {Name:mk3fa19df9059a7cd289477f6e36bd1b8a8de61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644427  271045 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt
	I1213 09:56:51.644510  271045 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key
	I1213 09:56:51.644572  271045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 09:56:51.644592  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt with IP's: []
	I1213 09:56:51.762782  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt ...
	I1213 09:56:51.762818  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt: {Name:mkc4655600dc8f487ec74e9635d5a6c0aaea04b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.763666  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key ...
	I1213 09:56:51.763686  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key: {Name:mkfc1bfb8023d67db678ef417275fa70be4e1a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.764520  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:56:51.764583  271045 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:56:51.764597  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:56:51.764630  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:56:51.764665  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:56:51.764701  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:56:51.764754  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:51.765415  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:56:51.785100  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:56:51.803674  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:56:51.820883  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:56:51.838678  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:56:51.855995  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:56:51.873808  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:56:51.891156  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:56:51.908530  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:56:51.925774  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:56:51.943306  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:56:51.959997  271045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:56:51.972782  271045 ssh_runner.go:195] Run: openssl version
	I1213 09:56:51.978921  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.986461  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:56:51.993616  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997401  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997462  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:56:52.049678  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:56:52.070937  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:56:52.084538  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.094354  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:56:52.106583  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110602  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110668  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.153509  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.160769  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.168129  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.175195  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:56:52.182476  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186073  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186133  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.226828  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:56:52.234290  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 09:56:52.241627  271045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:56:52.245109  271045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:56:52.245166  271045 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:52.245251  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:56:52.245315  271045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:56:52.270271  271045 cri.go:89] found id: ""
	I1213 09:56:52.270344  271045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:56:52.278009  271045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:56:52.285767  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:52.285833  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:52.293380  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:52.293416  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:52.293469  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:52.301136  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:52.301228  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:52.308290  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:52.315693  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:52.315758  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:52.323086  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.330869  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:52.330965  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.338261  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:52.345809  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:52.345871  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:56:52.353258  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:52.470124  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:52.470684  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:52.537914  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:56.509890  271045 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:56.509925  271045 kubeadm.go:319] 
	I1213 10:00:56.510001  271045 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:00:56.511602  271045 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:00:56.511668  271045 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:00:56.511767  271045 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:00:56.511830  271045 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:00:56.511910  271045 kubeadm.go:319] OS: Linux
	I1213 10:00:56.511982  271045 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:00:56.512040  271045 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:00:56.512094  271045 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:00:56.512149  271045 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:00:56.512201  271045 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:00:56.512255  271045 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:00:56.512304  271045 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:00:56.512355  271045 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:00:56.512404  271045 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:00:56.512480  271045 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:00:56.512579  271045 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:00:56.512672  271045 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:00:56.512738  271045 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:00:56.517802  271045 out.go:252]   - Generating certificates and keys ...
	I1213 10:00:56.517920  271045 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:00:56.518021  271045 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:00:56.518091  271045 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:00:56.518172  271045 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:00:56.518249  271045 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:00:56.518309  271045 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:00:56.518377  271045 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:00:56.518506  271045 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:00:56.518570  271045 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:00:56.518698  271045 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:00:56.518773  271045 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:00:56.518859  271045 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:00:56.518935  271045 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:00:56.519032  271045 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:00:56.519114  271045 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:00:56.519194  271045 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:00:56.519269  271045 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:00:56.519370  271045 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:00:56.519457  271045 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:00:56.519579  271045 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:00:56.519671  271045 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:00:56.522631  271045 out.go:252]   - Booting up control plane ...
	I1213 10:00:56.522739  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:00:56.522840  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:00:56.522914  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:00:56.523056  271045 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:00:56.523185  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:00:56.523309  271045 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:00:56.523400  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:00:56.523445  271045 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:00:56.523640  271045 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:00:56.523773  271045 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:00:56.523846  271045 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516844s
	I1213 10:00:56.523854  271045 kubeadm.go:319] 
	I1213 10:00:56.523912  271045 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:56.523948  271045 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:56.524055  271045 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:56.524063  271045 kubeadm.go:319] 
	I1213 10:00:56.524166  271045 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:56.524206  271045 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:56.524241  271045 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:56.524270  271045 kubeadm.go:319] 
	W1213 10:00:56.524373  271045 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516844s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:00:56.524458  271045 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:00:56.936054  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:00:56.948710  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:00:56.948772  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:00:56.956533  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:00:56.956554  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 10:00:56.956624  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:00:56.964049  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:00:56.964112  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:00:56.971099  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:00:56.978635  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:00:56.978720  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:00:56.986082  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:00:56.993634  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:00:56.993701  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:00:57.003184  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:00:57.013129  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:00:57.013248  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:00:57.021455  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:00:57.062115  271045 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:00:57.062425  271045 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:00:57.136560  271045 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:00:57.136636  271045 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:00:57.136678  271045 kubeadm.go:319] OS: Linux
	I1213 10:00:57.136729  271045 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:00:57.136783  271045 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:00:57.136834  271045 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:00:57.136885  271045 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:00:57.136937  271045 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:00:57.136994  271045 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:00:57.137044  271045 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:00:57.137096  271045 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:00:57.137147  271045 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:00:57.207624  271045 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:00:57.207820  271045 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:00:57.207976  271045 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:00:57.213325  271045 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:00:57.218560  271045 out.go:252]   - Generating certificates and keys ...
	I1213 10:00:57.218672  271045 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:00:57.218785  271045 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:00:57.218899  271045 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:00:57.218984  271045 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:00:57.219077  271045 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:00:57.219151  271045 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:00:57.219232  271045 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:00:57.219337  271045 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:00:57.219441  271045 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:00:57.219573  271045 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:00:57.219786  271045 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:00:57.219856  271045 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:00:57.593590  271045 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:00:58.124861  271045 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:00:58.251326  271045 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:00:58.576584  271045 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:00:58.987419  271045 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:00:58.988170  271045 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:00:58.991572  271045 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:00:58.994699  271045 out.go:252]   - Booting up control plane ...
	I1213 10:00:58.994809  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:00:58.994900  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:00:58.995906  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:00:59.017175  271045 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:00:59.017323  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:00:59.029473  271045 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:00:59.029578  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:00:59.029624  271045 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:00:59.173704  271045 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:00:59.173828  271045 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:04:59.174754  271045 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001131308s
	I1213 10:04:59.174784  271045 kubeadm.go:319] 
	I1213 10:04:59.174866  271045 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:04:59.174909  271045 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:04:59.175039  271045 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:04:59.175055  271045 kubeadm.go:319] 
	I1213 10:04:59.175168  271045 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:04:59.175204  271045 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:04:59.175239  271045 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:04:59.175244  271045 kubeadm.go:319] 
	I1213 10:04:59.180339  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:04:59.180784  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:04:59.180907  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:04:59.181153  271045 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:04:59.181164  271045 kubeadm.go:319] 
	I1213 10:04:59.181233  271045 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:04:59.181295  271045 kubeadm.go:403] duration metric: took 8m6.936133561s to StartCluster
	I1213 10:04:59.181332  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:04:59.181396  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:04:59.205616  271045 cri.go:89] found id: ""
	I1213 10:04:59.205641  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.205649  271045 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:04:59.205656  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:04:59.205723  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:04:59.230248  271045 cri.go:89] found id: ""
	I1213 10:04:59.230273  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.230283  271045 logs.go:284] No container was found matching "etcd"
	I1213 10:04:59.230289  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:04:59.230350  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:04:59.255440  271045 cri.go:89] found id: ""
	I1213 10:04:59.255466  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.255474  271045 logs.go:284] No container was found matching "coredns"
	I1213 10:04:59.255481  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:04:59.255559  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:04:59.279549  271045 cri.go:89] found id: ""
	I1213 10:04:59.279574  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.279583  271045 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:04:59.279590  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:04:59.279651  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:04:59.303984  271045 cri.go:89] found id: ""
	I1213 10:04:59.304010  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.304019  271045 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:04:59.304025  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:04:59.304093  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:04:59.328934  271045 cri.go:89] found id: ""
	I1213 10:04:59.328955  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.328964  271045 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:04:59.328970  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:04:59.329031  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:04:59.352693  271045 cri.go:89] found id: ""
	I1213 10:04:59.352718  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.352727  271045 logs.go:284] No container was found matching "kindnet"
	I1213 10:04:59.352737  271045 logs.go:123] Gathering logs for containerd ...
	I1213 10:04:59.352748  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:04:59.389714  271045 logs.go:123] Gathering logs for container status ...
	I1213 10:04:59.389747  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:04:59.417775  271045 logs.go:123] Gathering logs for kubelet ...
	I1213 10:04:59.417803  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:04:59.476095  271045 logs.go:123] Gathering logs for dmesg ...
	I1213 10:04:59.476128  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:04:59.492802  271045 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:04:59.492834  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:04:59.580211  271045 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 10:04:59.580238  271045 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:04:59.580298  271045 out.go:285] * 
	W1213 10:04:59.580386  271045 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.580407  271045 out.go:285] * 
	* 
	W1213 10:04:59.583250  271045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:04:59.590673  271045 out.go:203] 
	W1213 10:04:59.593644  271045 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.594239  271045 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:04:59.594323  271045 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:04:59.597653  271045 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
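The root failure above is the kubelet never answering its health probe: kubeadm polls http://127.0.0.1:10248/healthz and gives up after 4m0s. A minimal triage sketch, assuming shell access to the node via `minikube ssh` and reusing this run's profile name; the commands mirror what the log itself recommends, and the YAML field casing is an assumption based on the printed warning ('FailCgroupV1' to 'false'):

    # The two checks kubeadm recommends, run inside the minikube node.
    minikube ssh -p newest-cni-987495 -- sudo systemctl status kubelet
    minikube ssh -p newest-cni-987495 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # The SystemVerification warning asks cgroup v1 hosts to set the kubelet
    # configuration option FailCgroupV1 to false for kubelet v1.35+. In a
    # KubeletConfiguration file (e.g. the /var/lib/kubelet/config.yaml written
    # above) that would read, with camelCase field naming:
    #     apiVersion: kubelet.config.k8s.io/v1beta1
    #     kind: KubeletConfiguration
    #     failCgroupV1: false

minikube's own suggestion later in the log, `--extra-config=kubelet.cgroup-driver=systemd`, targets the other common cause kubeadm lists: a cgroup-driver mismatch between the kubelet and the container runtime.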
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-987495
helpers_test.go:244: (dbg) docker inspect newest-cni-987495:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	        "Created": "2025-12-13T09:56:44.68064601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271479,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:56:44.745643975Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hosts",
	        "LogPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac-json.log",
	        "Name": "/newest-cni-987495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-987495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-987495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	                "LowerDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-987495",
	                "Source": "/var/lib/docker/volumes/newest-cni-987495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-987495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-987495",
	                "name.minikube.sigs.k8s.io": "newest-cni-987495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8379243b191e450952047cb2444adc94946b4951abd396603cd88d0baeaa0bc8",
	            "SandboxKey": "/var/run/docker/netns/8379243b191e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-987495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:c3:b8:48:db:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1cc05b29a6a537694a06e8a33e1431f6867104db51c8eb4299d9f9f07c01c4",
	                    "EndpointID": "6785b1ba4a8acc1a6b6d8f39bbe13572d604692626753d08e29f1862fd47e00f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-987495",
	                        "5d45a23b08cd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
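The inspect dump above is the post-mortem's full record; when scanning these reports by hand, Go-template filters pull out just the fields this failure turns on (container state, published apiserver port, node IP). A sketch using docker's `--format` flag against the same container; the template paths match the JSON keys shown above:

    # Is the container still running, and with what exit code?
    docker inspect -f '{{.State.Status}} {{.State.ExitCode}}' newest-cni-987495

    # Host port mapped to the apiserver port 8443 inside the container.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-987495

    # The node IP on the profile's network.
    docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-987495").IPAddress}}' newest-cni-987495

Against the values above these would print `running 0`, `33096`, and `192.168.85.2`, confirming the container itself is healthy and the failure sits inside it.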
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 6 (404.529466ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:05:00.017554  283366 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
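Exit status 6 here is a kubeconfig problem rather than a host problem: the profile's endpoint is missing from the kubeconfig, which is what both the "stale minikube-vm" warning and the E1213 line say. A sketch of the repair the output itself proposes, assuming the same profile name:

    # Rewrite the kubeconfig entry for the profile, as the warning recommends.
    minikube update-context -p newest-cni-987495

    # Confirm the context now resolves before re-running the status check.
    kubectl config get-contexts
    kubectl --context newest-cni-987495 cluster-info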
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:02:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:02:11.945228  279351 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:02:11.945357  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945368  279351 out.go:374] Setting ErrFile to fd 2...
	I1213 10:02:11.945373  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945614  279351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:02:11.945995  279351 out.go:368] Setting JSON to false
	I1213 10:02:11.946845  279351 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6284,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:02:11.946916  279351 start.go:143] virtualization:  
	I1213 10:02:11.952053  279351 out.go:179] * [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:02:11.955099  279351 notify.go:221] Checking for updates...
	I1213 10:02:11.955646  279351 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:02:11.958871  279351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:02:11.961865  279351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:11.964714  279351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:02:11.967733  279351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:02:11.970563  279351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:02:11.973905  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:11.974462  279351 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:02:11.997403  279351 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:02:11.997517  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.056888  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.046991024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.057004  279351 docker.go:319] overlay module found
	I1213 10:02:12.060124  279351 out.go:179] * Using the docker driver based on existing profile
	I1213 10:02:12.062920  279351 start.go:309] selected driver: docker
	I1213 10:02:12.062939  279351 start.go:927] validating driver "docker" against &{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.063028  279351 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:02:12.063866  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.125598  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.116735082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.125931  279351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:02:12.125965  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:12.126013  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:12.126061  279351 start.go:353] cluster config:
	{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.130988  279351 out.go:179] * Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	I1213 10:02:12.133837  279351 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:02:12.136720  279351 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:02:12.139557  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:12.139700  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.140016  279351 cache.go:107] acquiring lock: {Name:mk1139c6b82931eb02e4fc01be1646c4b5fb6137 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140101  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 10:02:12.140115  279351 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.272µs
	I1213 10:02:12.140129  279351 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 10:02:12.140147  279351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:02:12.140331  279351 cache.go:107] acquiring lock: {Name:mkdbfdeb98feed2961bb0c3f8a6d24ab310632c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140399  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 10:02:12.140411  279351 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 85.319µs
	I1213 10:02:12.140418  279351 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140432  279351 cache.go:107] acquiring lock: {Name:mke9e3c7a7c5dbec5022163863159aa6109df603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140467  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:02:12.140476  279351 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.475µs
	I1213 10:02:12.140483  279351 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140493  279351 cache.go:107] acquiring lock: {Name:mkc53cc9694a66de0b7b66cb687f9b4074b3c86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140525  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:02:12.140535  279351 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.659µs
	I1213 10:02:12.140542  279351 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140552  279351 cache.go:107] acquiring lock: {Name:mk349a8caa03fed06b3fb3e0b39b00347dcb9b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140580  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:02:12.140590  279351 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 38.45µs
	I1213 10:02:12.140596  279351 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140607  279351 cache.go:107] acquiring lock: {Name:mk3eb587f4f7424524980a5884c47c318ddc6f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140639  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 10:02:12.140648  279351 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.723µs
	I1213 10:02:12.140653  279351 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 10:02:12.140663  279351 cache.go:107] acquiring lock: {Name:mk0e27a2c36e6dbaae7432bc4e472a6212c75814 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140693  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 10:02:12.140711  279351 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.993µs
	I1213 10:02:12.140720  279351 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 10:02:12.140730  279351 cache.go:107] acquiring lock: {Name:mk07cf085b7776efa96cbbe85a2f7495a2806d09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140801  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 10:02:12.140813  279351 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 83.981µs
	I1213 10:02:12.140820  279351 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 10:02:12.140827  279351 cache.go:87] Successfully saved all images to host disk.
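	[editor's note] The cache.go entries above follow one pattern per image: take a per-image lock, stat the tarball under .minikube/cache/images, and record an immediate "exists"/"succeeded" pair when nothing needs downloading (hence the microsecond durations). A minimal Go sketch of that exists-or-save flow; the lock map and saveToTar helper are illustrative stand-ins, not minikube's API:

package main

import (
	"fmt"
	"os"
	"sync"
)

var cacheLocks sync.Map // one mutex per destination path

func cacheImage(image, dest string) error {
	mu, _ := cacheLocks.LoadOrStore(dest, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	if _, err := os.Stat(dest); err == nil {
		return nil // tarball already cached: "exists", skip the save
	}
	return saveToTar(image, dest)
}

// saveToTar is a hypothetical downloader; the real routine exports the image.
func saveToTar(image, dest string) error {
	fmt.Printf("save to tar file %s -> %s\n", image, dest)
	return os.WriteFile(dest, nil, 0o644)
}

func main() {
	_ = cacheImage("registry.k8s.io/pause:3.10.1", "/tmp/pause_3.10.1")
}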
	I1213 10:02:12.158842  279351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:02:12.158865  279351 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:02:12.158888  279351 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:02:12.158915  279351 start.go:360] acquireMachinesLock for no-preload-328069: {Name:mkb27df066f9039321ce696d5a7013e52143011a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.158977  279351 start.go:364] duration metric: took 42.741µs to acquireMachinesLock for "no-preload-328069"
	I1213 10:02:12.158998  279351 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:02:12.159006  279351 fix.go:54] fixHost starting: 
	I1213 10:02:12.159253  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.176273  279351 fix.go:112] recreateIfNeeded on no-preload-328069: state=Stopped err=<nil>
	W1213 10:02:12.176305  279351 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:02:12.181446  279351 out.go:252] * Restarting existing docker container for "no-preload-328069" ...
	I1213 10:02:12.181532  279351 cli_runner.go:164] Run: docker start no-preload-328069
	I1213 10:02:12.462743  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.496878  279351 kic.go:430] container "no-preload-328069" state is running.
	I1213 10:02:12.497965  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:12.519887  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.520284  279351 machine.go:94] provisionDockerMachine start ...
	I1213 10:02:12.520377  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:12.540812  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:12.541137  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:12.541152  279351 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:02:12.541877  279351 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:02:15.695176  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
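	[editor's note] The "Error dialing TCP: ssh: handshake failed: EOF" on the first attempt is expected: `docker start` returns before sshd inside the container accepts connections, so libmachine keeps retrying until the `hostname` probe succeeds about three seconds later. A rough Go sketch of that wait-until-ready loop, using a plain TCP dial as a stand-in for the real SSH handshake (address and retry cadence are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // the port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond) // assumed retry interval
	}
}

func main() {
	if err := waitForSSH("127.0.0.1:33098", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}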
	I1213 10:02:15.695202  279351 ubuntu.go:182] provisioning hostname "no-preload-328069"
	I1213 10:02:15.695302  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.713225  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.713580  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.713597  279351 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-328069 && echo "no-preload-328069" | sudo tee /etc/hostname
	I1213 10:02:15.876751  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 10:02:15.876830  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.894850  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.895176  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.895200  279351 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328069/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:02:16.048412  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:02:16.048436  279351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:02:16.048458  279351 ubuntu.go:190] setting up certificates
	I1213 10:02:16.048468  279351 provision.go:84] configureAuth start
	I1213 10:02:16.048553  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.075718  279351 provision.go:143] copyHostCerts
	I1213 10:02:16.075798  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:02:16.075813  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:02:16.075907  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:02:16.076022  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:02:16.076028  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:02:16.076054  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:02:16.076133  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:02:16.076138  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:02:16.076163  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:02:16.076218  279351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.no-preload-328069 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-328069]
	I1213 10:02:16.381103  279351 provision.go:177] copyRemoteCerts
	I1213 10:02:16.381179  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:02:16.381229  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.401342  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.507428  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:02:16.525230  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:02:16.542799  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:02:16.561062  279351 provision.go:87] duration metric: took 512.572112ms to configureAuth
	I1213 10:02:16.561095  279351 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:02:16.561318  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:16.561332  279351 machine.go:97] duration metric: took 4.041034442s to provisionDockerMachine
	I1213 10:02:16.561341  279351 start.go:293] postStartSetup for "no-preload-328069" (driver="docker")
	I1213 10:02:16.561352  279351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:02:16.561415  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:02:16.561466  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.581239  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.687645  279351 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:02:16.691142  279351 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:02:16.691212  279351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:02:16.691231  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:02:16.691302  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:02:16.691382  279351 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:02:16.691493  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:02:16.698909  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:16.716254  279351 start.go:296] duration metric: took 154.898803ms for postStartSetup
	I1213 10:02:16.716393  279351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:02:16.716444  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.733818  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.836603  279351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:02:16.841822  279351 fix.go:56] duration metric: took 4.68280802s for fixHost
	I1213 10:02:16.841848  279351 start.go:83] releasing machines lock for "no-preload-328069", held for 4.682859762s
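	[editor's note] The acquireMachinesLock/releasing pair above brackets all host mutation; the logged lock spec {Delay:500ms Timeout:10m0s} means poll every 500ms for up to ten minutes. A hypothetical file-based sketch with those semantics (not minikube's actual lock implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: whoever creates the file holds the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/machines-no-preload.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("holding machines lock")
}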
	I1213 10:02:16.841920  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.859796  279351 ssh_runner.go:195] Run: cat /version.json
	I1213 10:02:16.859857  279351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:02:16.859863  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.859911  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.883792  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.886103  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:17.082036  279351 ssh_runner.go:195] Run: systemctl --version
	I1213 10:02:17.088528  279351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:02:17.092773  279351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:02:17.092838  279351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:02:17.100613  279351 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:02:17.100639  279351 start.go:496] detecting cgroup driver to use...
	I1213 10:02:17.100671  279351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:02:17.100716  279351 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:02:17.117849  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:02:17.130707  279351 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:02:17.130820  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:02:17.146153  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:02:17.159452  279351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:02:17.271735  279351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:02:17.386128  279351 docker.go:234] disabling docker service ...
	I1213 10:02:17.386205  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:02:17.401329  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:02:17.414137  279351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:02:17.532620  279351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:02:17.660743  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:02:17.673611  279351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:02:17.687734  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:02:17.696861  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:02:17.705596  279351 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:02:17.705702  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:02:17.714350  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.723153  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:02:17.732016  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.740626  279351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:02:17.748540  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:02:17.757314  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:02:17.766110  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:02:17.774949  279351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:02:17.782195  279351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:02:17.789627  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:17.894369  279351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:02:17.987177  279351 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:02:17.987297  279351 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:02:17.991600  279351 start.go:564] Will wait 60s for crictl version
	I1213 10:02:17.991728  279351 ssh_runner.go:195] Run: which crictl
	I1213 10:02:17.995375  279351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:02:18.022384  279351 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
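	[editor's note] Both 60s waits above are simple polls: first for the containerd socket to reappear after the restart, then for crictl to answer with a version. A small Go sketch of the socket wait; the poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists and is a socket, not a stale file
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}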
	I1213 10:02:18.022552  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.048621  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.076009  279351 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:02:18.078918  279351 cli_runner.go:164] Run: docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:02:18.096351  279351 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 10:02:18.100312  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:02:18.110269  279351 kubeadm.go:884] updating cluster {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:02:18.110401  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:18.110451  279351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:02:18.137499  279351 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:02:18.137523  279351 cache_images.go:86] Images are preloaded, skipping loading
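	[editor's note] The preload decision above comes from listing images through crictl's JSON output and checking that everything required is already present. An approximate Go sketch (field names follow the CRI ListImages JSON; the required list here is a placeholder):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil // at least one image still needs loading
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"registry.k8s.io/pause:3.10.1"})
	fmt.Println(ok, err)
}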
	I1213 10:02:18.137531  279351 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:02:18.137633  279351 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:02:18.137698  279351 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:02:18.163191  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:18.163216  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:18.163234  279351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:02:18.163255  279351 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328069 NodeName:no-preload-328069 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:02:18.163402  279351 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-328069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:02:18.163480  279351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:02:18.171245  279351 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:02:18.171338  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:02:18.178895  279351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:02:18.191581  279351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:02:18.209596  279351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 10:02:18.222717  279351 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:02:18.227371  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:02:18.237443  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:18.378945  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:18.395659  279351 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069 for IP: 192.168.76.2
	I1213 10:02:18.395721  279351 certs.go:195] generating shared ca certs ...
	I1213 10:02:18.395754  279351 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:18.395941  279351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:02:18.396012  279351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:02:18.396046  279351 certs.go:257] generating profile certs ...
	I1213 10:02:18.396189  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key
	I1213 10:02:18.396294  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a
	I1213 10:02:18.396360  279351 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key
	I1213 10:02:18.396502  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:02:18.396559  279351 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:02:18.396589  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:02:18.396649  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:02:18.396703  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:02:18.396763  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:02:18.396836  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:18.397509  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:02:18.418112  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:02:18.438679  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:02:18.457466  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:02:18.475034  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:02:18.492480  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:02:18.509931  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:02:18.526519  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:02:18.543688  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:02:18.560978  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:02:18.577824  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:02:18.595597  279351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:02:18.608319  279351 ssh_runner.go:195] Run: openssl version
	I1213 10:02:18.614518  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.622207  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:02:18.629586  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633292  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633355  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.674403  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:02:18.682293  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.689424  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:02:18.697040  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700632  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700740  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.741591  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:02:18.749136  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.756646  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:02:18.764252  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768073  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768140  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.809211  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
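	[editor's note] Each CA above is installed the same way: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash (`openssl x509 -hash -noout`), and point the /etc/ssl/certs/<hash>.0 symlink at it so the default verify path can find it (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A hedged Go sketch of that last step:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // the equivalent of ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}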
	I1213 10:02:18.816468  279351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:02:18.820048  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:02:18.860814  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:02:18.901547  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:02:18.942314  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:02:18.983558  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:02:19.024500  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
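	[editor's note] The six openssl runs above all use `-checkend 86400`, which exits nonzero when a certificate expires within the next 24 hours. The equivalent check in pure Go with crypto/x509 (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the next d, mirroring -checkend.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}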
	I1213 10:02:19.067253  279351 kubeadm.go:401] StartCluster: {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:19.067362  279351 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:02:19.067437  279351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:02:19.094782  279351 cri.go:89] found id: ""
	I1213 10:02:19.094872  279351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:02:19.102658  279351 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:02:19.102679  279351 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:02:19.102731  279351 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:02:19.110008  279351 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:02:19.110442  279351 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.110549  279351 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-328069" cluster setting kubeconfig missing "no-preload-328069" context setting]
	I1213 10:02:19.110833  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.112165  279351 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:02:19.119655  279351 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 10:02:19.119686  279351 kubeadm.go:602] duration metric: took 17.001518ms to restartPrimaryControlPlane
	I1213 10:02:19.119696  279351 kubeadm.go:403] duration metric: took 52.455088ms to StartCluster
	I1213 10:02:19.119710  279351 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.119764  279351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.120342  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.120541  279351 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:02:19.120828  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:19.120875  279351 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:02:19.120946  279351 addons.go:70] Setting storage-provisioner=true in profile "no-preload-328069"
	I1213 10:02:19.120959  279351 addons.go:239] Setting addon storage-provisioner=true in "no-preload-328069"
	I1213 10:02:19.120992  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121000  279351 addons.go:70] Setting dashboard=true in profile "no-preload-328069"
	I1213 10:02:19.121019  279351 addons.go:239] Setting addon dashboard=true in "no-preload-328069"
	W1213 10:02:19.121026  279351 addons.go:248] addon dashboard should already be in state true
	I1213 10:02:19.121047  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121443  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.121464  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.123823  279351 addons.go:70] Setting default-storageclass=true in profile "no-preload-328069"
	I1213 10:02:19.124331  279351 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328069"
	I1213 10:02:19.125424  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.125429  279351 out.go:179] * Verifying Kubernetes components...
	I1213 10:02:19.128526  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:19.159919  279351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:02:19.162662  279351 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:02:19.165476  279351 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:02:19.165500  279351 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.165540  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:02:19.165616  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.168247  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:02:19.168273  279351 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:02:19.168347  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.174889  279351 addons.go:239] Setting addon default-storageclass=true in "no-preload-328069"
	I1213 10:02:19.174936  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.175371  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.207894  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.232585  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.238233  279351 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.238255  279351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:02:19.238316  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.263752  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.335605  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:19.413293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.437951  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:02:19.437973  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:02:19.451798  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.498903  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:02:19.498969  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:02:19.535605  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:02:19.535632  279351 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:02:19.549971  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:02:19.549998  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:02:19.563358  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:02:19.563384  279351 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:02:19.576961  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:02:19.576985  279351 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:02:19.590019  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:02:19.590047  279351 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:02:19.603026  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:02:19.603101  279351 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:02:19.616283  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:19.616306  279351 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:02:19.629758  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.022144  279351 node_ready.go:35] waiting up to 6m0s for node "no-preload-328069" to be "Ready" ...
	W1213 10:02:20.022218  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022247  279351 retry.go:31] will retry after 222.509243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.022338  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022352  279351 retry.go:31] will retry after 268.916005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
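	[editor's note] Every apply fails the same way here: kubectl cannot reach the apiserver on localhost:8443 yet, so retry.go schedules another attempt after a short randomized delay (222ms, 268ms, 142ms above). A generic Go sketch of that retry loop; the attempt count and base delay are assumptions:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Randomize around the base delay, as in "will retry after 222.509243ms".
		d := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 250*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused") // apiserver not up yet
		}
		return nil
	})
}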
	W1213 10:02:20.022845  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.023027  279351 retry.go:31] will retry after 142.748547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.166410  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:20.226014  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.226097  279351 retry.go:31] will retry after 425.843394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.244927  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:20.292349  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:20.310341  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.310377  279351 retry.go:31] will retry after 355.473376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.349816  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.349858  279351 retry.go:31] will retry after 264.866281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.615981  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:20.652460  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.666962  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:20.692927  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.693006  279351 retry.go:31] will retry after 664.622012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.735811  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.735905  279351 retry.go:31] will retry after 823.814702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.764147  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.764185  279351 retry.go:31] will retry after 778.225677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.358304  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.419247  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.419281  279351 retry.go:31] will retry after 462.360443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.543454  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:21.560472  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:21.637848  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.637931  279351 retry.go:31] will retry after 761.466559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:21.651294  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.651336  279351 retry.go:31] will retry after 529.51866ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.882480  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.939004  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.939036  279351 retry.go:31] will retry after 1.587615767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:22.022643  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
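
Alongside the addon retries, the node_ready.go lines above and below poll the node's Ready condition against the same unreachable apiserver at 192.168.76.2:8443. A client-go sketch of that kind of check (assumed, not minikube's code; the node name and kubeconfig path are taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True,
// surfacing transport errors (like "connection refused") for the caller
// to log and retry, as the node_ready.go warnings do.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("getting node %q: %w", name, err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "no-preload-328069")
	fmt.Println(ready, err)
}
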
	I1213 10:02:22.181172  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:22.245389  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.245423  279351 retry.go:31] will retry after 1.713713268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.399656  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:22.456680  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.456710  279351 retry.go:31] will retry after 1.136977531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.527628  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:23.594019  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:23.601576  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.601611  279351 retry.go:31] will retry after 1.62095546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:23.655668  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.655711  279351 retry.go:31] will retry after 2.767396253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.960301  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:24.023123  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:24.027493  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:24.027609  279351 retry.go:31] will retry after 2.083793774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.223152  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:25.294507  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.294547  279351 retry.go:31] will retry after 3.357306592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:26.023508  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:26.111910  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:26.170217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.170253  279351 retry.go:31] will retry after 1.692121147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.423771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:26.478390  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.478420  279351 retry.go:31] will retry after 3.848755301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.863247  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:27.922311  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.922347  279351 retry.go:31] will retry after 3.151041885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:28.522771  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
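The node_ready.go:55 warnings interleaved here come from a separate poller that asks the apiserver for the node's Ready condition on each tick. Note that it targets 192.168.76.2:8443 while the kubectl applies target localhost:8443; both are refused, which points at the apiserver process itself being down rather than a host-networking mismatch. A rough equivalent of that poll using client-go directly (assumed code, not minikube's node_ready implementation):

	// Assumed sketch of a Ready-condition poll with client-go; paths and
	// the node name are taken from the log, everything else is illustrative.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "connect: connection refused" while the apiserver is down
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for { // polls until the node reports Ready
			ready, err := nodeReady(cs, "no-preload-328069")
			if err != nil {
				fmt.Println("will retry:", err)
			} else if ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}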
	I1213 10:02:28.651995  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:28.709111  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:28.709149  279351 retry.go:31] will retry after 6.321683751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.328257  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:30.391917  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.391949  279351 retry.go:31] will retry after 2.426020497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:30.523587  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:31.074075  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:31.135665  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:31.135702  279351 retry.go:31] will retry after 5.370688496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:32.818771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:32.881303  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:32.881336  279351 retry.go:31] will retry after 6.291168603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:33.022961  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:35.031970  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:35.105661  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:35.105695  279351 retry.go:31] will retry after 7.37782956s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:35.523543  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:36.507591  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:36.594781  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:36.594821  279351 retry.go:31] will retry after 11.051382377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:37.523602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:39.173293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:39.235217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:39.235250  279351 retry.go:31] will retry after 10.724210844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:40.022845  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:42.022965  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:42.483792  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:42.553607  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:42.553640  279351 retry.go:31] will retry after 7.978735352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:44.522618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:46.522815  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:47.647156  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:47.708591  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:47.708634  279351 retry.go:31] will retry after 13.118586966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:48.523193  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:49.959743  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:50.025078  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.025108  279351 retry.go:31] will retry after 20.588870551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.533198  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:50.605977  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.606015  279351 retry.go:31] will retry after 10.142953159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
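	The retry.go entries above show minikube's addon appliers backing off between attempts, with intervals that grow and vary from run to run (13.1s, 20.6s, 10.1s, ...). A minimal Go sketch of that retry-with-jittered-backoff shape — hypothetical names and constants, not minikube's actual retry package:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// applyWithRetry is a hypothetical sketch of the pattern visible in the
// retry.go lines above: run the apply, and on failure sleep a growing,
// jittered interval before the next attempt.
func applyWithRetry(run func() error, maxAttempts int) error {
	base := 10 * time.Second
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = run(); err == nil {
			return nil
		}
		// Jitter the wait so concurrent appliers do not retry in lockstep,
		// which is why the logged intervals vary rather than doubling cleanly.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		base += 5 * time.Second
	}
	return err
}

func main() {
	attempts := 0
	err := applyWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 5)
	fmt.Println("final:", err)
}
```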
	W1213 10:02:51.022904  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:53.522602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:55.522760  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:58.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:00.022755  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:00.749166  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:00.808153  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.808187  279351 retry.go:31] will retry after 20.994258363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.827383  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:00.892573  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.892614  279351 retry.go:31] will retry after 23.506083404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:02.022886  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:04.522818  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:07.022905  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:09.522689  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:10.615035  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:10.674075  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:10.674105  279351 retry.go:31] will retry after 31.171515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:12.023028  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:14.523566  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:17.022946  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:19.522805  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:21.803099  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:21.862689  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:21.862723  279351 retry.go:31] will retry after 32.702784158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:22.023647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:24.399112  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:24.467406  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:24.467440  279351 retry.go:31] will retry after 48.135808011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:24.523014  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:27.022918  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:29.522877  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:32.022758  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:34.023751  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:36.522647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:38.522730  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:41.022772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:41.846416  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:41.903373  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:41.903405  279351 retry.go:31] will retry after 36.157114494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:43.023322  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:45.023831  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:47.522729  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:50.022691  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:52.022951  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:54.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:54.566096  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:54.623468  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:54.623599  279351 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
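	All of these failures are client-side: kubectl apply first downloads the OpenAPI schema from the apiserver to validate the manifests, and with nothing listening on 8443 that download is refused before any object is ever submitted. The suggested --validate=false would only skip the schema check; the apply itself would still need a reachable apiserver. A small sketch that probes the two addresses from the log directly to confirm whether anything is listening (addresses taken from the log; adjust for your cluster):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the apiserver endpoints seen in the errors above. A TCP-level
// "connection refused" here confirms the failure is the apiserver being
// down, not anything wrong with the addon manifests.
func main() {
	for _, addr := range []string{"localhost:8443", "192.168.76.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
			continue
		}
		conn.Close()
		fmt.Printf("%s: listening\n", addr)
	}
}
```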
	W1213 10:03:56.523499  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:59.022636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:01.022702  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:03.022778  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:05.523648  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:08.022740  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:10.522719  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:12.522937  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:12.604177  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:04:12.663716  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:12.663824  279351 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:04:14.523618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:17.022816  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:18.061133  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:04:18.126667  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:18.126767  279351 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:04:18.129674  279351 out.go:179] * Enabled addons: 
	I1213 10:04:18.132484  279351 addons.go:530] duration metric: took 1m59.011607468s for enable addons: enabled=[]
	W1213 10:04:19.522762  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:22.022958  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:24.023765  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:26.522595  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:28.522755  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:30.522923  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:33.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:35.522646  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:37.522741  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:40.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:42.023047  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:44.522737  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:46.522773  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:49.022679  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:51.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:53.522674  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:55.522749  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
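	The node_ready.go warnings above are a poll of the node's "Ready" condition, repeating every couple of seconds until the apiserver answers. A rough equivalent using client-go — a sketch, not minikube's code; the kubeconfig path and node name are taken from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(),
			"no-preload-328069", metav1.GetOptions{})
		if err != nil {
			// Matches the log: connection refused while the apiserver is down.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready condition:", c.Status)
				return
			}
		}
	}
}
```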
	I1213 10:04:59.174754  271045 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001131308s
	I1213 10:04:59.174784  271045 kubeadm.go:319] 
	I1213 10:04:59.174866  271045 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:04:59.174909  271045 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:04:59.175039  271045 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:04:59.175055  271045 kubeadm.go:319] 
	I1213 10:04:59.175168  271045 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:04:59.175204  271045 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:04:59.175239  271045 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:04:59.175244  271045 kubeadm.go:319] 
	I1213 10:04:59.180339  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:04:59.180784  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:04:59.180907  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:04:59.181153  271045 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:04:59.181164  271045 kubeadm.go:319] 
	I1213 10:04:59.181233  271045 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
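	kubeadm's wait-control-plane phase decides the kubelet is unhealthy by polling the local healthz endpoint named in the error above. Reproducing that probe by hand quickly separates "kubelet not running" from "kubelet up but failing health checks" before reaching for journalctl; a minimal sketch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Perform the same check kubeadm does (per the error message above):
// GET the kubelet's local healthz endpoint. Run this on the node itself.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet unreachable:", err) // the kubeadm failure mode here
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}
```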
	I1213 10:04:59.181295  271045 kubeadm.go:403] duration metric: took 8m6.936133561s to StartCluster
	I1213 10:04:59.181332  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:04:59.181396  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:04:59.205616  271045 cri.go:89] found id: ""
	I1213 10:04:59.205641  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.205649  271045 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:04:59.205656  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:04:59.205723  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:04:59.230248  271045 cri.go:89] found id: ""
	I1213 10:04:59.230273  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.230283  271045 logs.go:284] No container was found matching "etcd"
	I1213 10:04:59.230289  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:04:59.230350  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:04:59.255440  271045 cri.go:89] found id: ""
	I1213 10:04:59.255466  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.255474  271045 logs.go:284] No container was found matching "coredns"
	I1213 10:04:59.255481  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:04:59.255559  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:04:59.279549  271045 cri.go:89] found id: ""
	I1213 10:04:59.279574  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.279583  271045 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:04:59.279590  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:04:59.279651  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:04:59.303984  271045 cri.go:89] found id: ""
	I1213 10:04:59.304010  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.304019  271045 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:04:59.304025  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:04:59.304093  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:04:59.328934  271045 cri.go:89] found id: ""
	I1213 10:04:59.328955  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.328964  271045 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:04:59.328970  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:04:59.329031  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:04:59.352693  271045 cri.go:89] found id: ""
	I1213 10:04:59.352718  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.352727  271045 logs.go:284] No container was found matching "kindnet"
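	The sweep above asks the CRI runtime for containers matching each control-plane component and finds none — the control plane was never started. A compact sketch of the same sweep, using the crictl invocation from the logged commands (run on the node; assumes crictl is on PATH and sudo is available):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// For each control-plane component, list all matching CRI containers,
// mirroring the "crictl ps -a --quiet --name=..." calls in the log.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}
```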
	I1213 10:04:59.352737  271045 logs.go:123] Gathering logs for containerd ...
	I1213 10:04:59.352748  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:04:59.389714  271045 logs.go:123] Gathering logs for container status ...
	I1213 10:04:59.389747  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:04:59.417775  271045 logs.go:123] Gathering logs for kubelet ...
	I1213 10:04:59.417803  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:04:59.476095  271045 logs.go:123] Gathering logs for dmesg ...
	I1213 10:04:59.476128  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:04:59.492802  271045 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:04:59.492834  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:04:59.580211  271045 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 10:04:59.580238  271045 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:04:59.580298  271045 out.go:285] * 
	W1213 10:04:59.580386  271045 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.580407  271045 out.go:285] * 
	W1213 10:04:59.583250  271045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:04:59.590673  271045 out.go:203] 
	W1213 10:04:59.593644  271045 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.594239  271045 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:04:59.594323  271045 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:04:59.597653  271045 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799212375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799243784Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799281225Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799299350Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799309688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799320347Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799336388Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799349476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799366469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799408233Z" level=info msg="Connect containerd service"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799713418Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.800325698Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815613763Z" level=info msg="Start subscribing containerd event"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815835320Z" level=info msg="Start recovering state"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815650752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.816102900Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853471915Z" level=info msg="Start event monitor"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853663023Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853738773Z" level=info msg="Start streaming server"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853805555Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853873831Z" level=info msg="runtime interface starting up..."
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853935124Z" level=info msg="starting plugins..."
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.854001848Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 09:56:50 newest-cni-987495 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.859113307Z" level=info msg="containerd successfully booted in 0.086519s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:05:00.974921    4952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:05:00.975635    4952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:05:00.977184    4952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:05:00.977504    4952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:05:00.978950    4952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:05:01 up  1:47,  0 user,  load average: 0.13, 0.64, 1.42
	Linux newest-cni-987495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:04:58 newest-cni-987495 kubelet[4758]: E1213 10:04:58.070518    4758 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:04:58 newest-cni-987495 kubelet[4763]: E1213 10:04:58.823173    4763 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:04:58 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:04:59 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:04:59 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:04:59 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:04:59 newest-cni-987495 kubelet[4847]: E1213 10:04:59.587932    4847 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:04:59 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:04:59 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:05:00 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:05:00 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:05:00 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:05:00 newest-cni-987495 kubelet[4869]: E1213 10:05:00.404778    4869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:05:00 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:05:00 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:05:01 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:05:01 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:05:01 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 6 (366.103381ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:05:01.578874  283603 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-987495" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (502.16s)
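The kubelet journal above shows restart counters 319 through 322 all dying on the same validation error ("kubelet is configured to not run on a host using cgroup v1"), and the kubeadm preflight warning names the kubelet configuration option 'FailCgroupV1' as the explicit opt-out. A minimal triage sketch, assuming SSH access to the node through the minikube CLI (the profile name is taken from the log; stat and journalctl are stock tooling, not part of the test harness):

	# Confirm which cgroup hierarchy the node runs: "tmpfs" means v1, "cgroup2fs" means v2
	minikube ssh -p newest-cni-987495 -- stat -fc %T /sys/fs/cgroup
	# Read the kubelet's own failure reason, as the kubeadm output suggests
	minikube ssh -p newest-cni-987495 -- sudo journalctl -xeu kubelet --no-pager | tail -n 20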

x
+
TestStartStop/group/no-preload/serial/DeployApp (2.91s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-328069 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-328069 create -f testdata/busybox.yaml: exit status 1 (56.50574ms)

** stderr **
	error: context "no-preload-328069" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-328069 create -f testdata/busybox.yaml failed: exit status 1
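The create never reaches the cluster: the kubeconfig has no "no-preload-328069" context, as the status stderr below confirms. A minimal repair sketch, assuming the profile's container is still running; `minikube update-context` is the fix the status warning itself recommends:

	# Check whether kubectl knows the context at all
	kubectl config get-contexts no-preload-328069
	# Regenerate the kubeconfig entry for the profile
	minikube update-context -p no-preload-328069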
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:51:52.8299513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c2a9ce40eddef38103a6cf9a5059be6d55a21e5d26f2dcd09256f4d6e4e169b",
	            "SandboxKey": "/var/run/docker/netns/0c2a9ce40edd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:15:2e:f9:55:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "14441a2b315a1f21a464e01d546592920a40d2eff4ecca4a3389aa3acc59dd14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
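The inspect output shows the node container Running with 8443/tcp published on 127.0.0.1:33076, so the Docker side is healthy while the kubeconfig/apiserver side is not. Two quick cross-checks, assuming the container is still up (stock docker and curl usage, not harness commands; -k skips TLS verification because the apiserver certificate is not in the host trust store):

	# Re-derive the published apiserver port without hand-parsing the JSON
	docker port no-preload-328069 8443
	# Probe the forwarded apiserver endpoint from the host
	curl -k https://127.0.0.1:33076/healthz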
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 6 (338.299507ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:00:21.002297  276405 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
E1213 10:00:21.454291    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-640993                                                                                                                                                                                                                                  │ old-k8s-version-640993       │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:52 UTC │
	│ delete  │ -p kubernetes-upgrade-355809                                                                                                                                                                                                                               │ kubernetes-upgrade-355809    │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p disable-driver-mounts-130854                                                                                                                                                                                                                            │ disable-driver-mounts-130854 │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:56:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:56:39.477521  271045 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:56:39.477696  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.477728  271045 out.go:374] Setting ErrFile to fd 2...
	I1213 09:56:39.477749  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.478026  271045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:56:39.478473  271045 out.go:368] Setting JSON to false
	I1213 09:56:39.479400  271045 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5952,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:56:39.479497  271045 start.go:143] virtualization:  
	I1213 09:56:39.483651  271045 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:56:39.488083  271045 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:56:39.488164  271045 notify.go:221] Checking for updates...
	I1213 09:56:39.494770  271045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:56:39.497855  271045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:56:39.500958  271045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:56:39.504012  271045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:56:39.507152  271045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:56:39.510591  271045 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:39.510687  271045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:56:39.534137  271045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:56:39.534252  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.597640  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.588587407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.597744  271045 docker.go:319] overlay module found
	I1213 09:56:39.602972  271045 out.go:179] * Using the docker driver based on user configuration
	I1213 09:56:39.605905  271045 start.go:309] selected driver: docker
	I1213 09:56:39.605926  271045 start.go:927] validating driver "docker" against <nil>
	I1213 09:56:39.605939  271045 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:56:39.606668  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.659228  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.649874797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.659395  271045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 09:56:39.659424  271045 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 09:56:39.659705  271045 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:56:39.662610  271045 out.go:179] * Using Docker driver with root privileges
	I1213 09:56:39.665424  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:39.665484  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:39.665497  271045 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:56:39.665588  271045 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:39.668716  271045 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 09:56:39.671669  271045 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:56:39.674572  271045 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:56:39.677446  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:39.677492  271045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:56:39.677522  271045 cache.go:65] Caching tarball of preloaded images
	I1213 09:56:39.677617  271045 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:56:39.677632  271045 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:56:39.677739  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:39.677763  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json: {Name:mkb4456221b0cea9f33fc0d473e380a268794011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:39.677865  271045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:56:39.696673  271045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:56:39.696697  271045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:56:39.696712  271045 cache.go:243] Successfully downloaded all kic artifacts
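
For reference, the image.go:81/100 exchange above is a cache check: minikube asks the local daemon whether the pinned kicbase image is already present before deciding whether to pull. The same presence test can be phrased against the docker CLI, whose `image inspect` exits non-zero when the image is absent. A minimal sketch in Go (the image ref is shortened here; the real one is pinned by digest as shown in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether ref exists in the local docker daemon.
    // `docker image inspect` exits non-zero if the image is not present.
    func imageInDaemon(ref string) bool {
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"
    	if imageInDaemon(ref) {
    		fmt.Println(ref, "found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println(ref, "missing; a pull would happen here")
    	}
    }
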
	I1213 09:56:39.696745  271045 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:56:39.696846  271045 start.go:364] duration metric: took 80.821µs to acquireMachinesLock for "newest-cni-987495"
	I1213 09:56:39.696875  271045 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:56:39.696947  271045 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:56:39.700273  271045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:56:39.700478  271045 start.go:159] libmachine.API.Create for "newest-cni-987495" (driver="docker")
	I1213 09:56:39.700510  271045 client.go:173] LocalClient.Create starting
	I1213 09:56:39.700595  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:56:39.700636  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700653  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.700719  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:56:39.700738  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700753  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.701087  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:56:39.716190  271045 cli_runner.go:211] docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:56:39.716263  271045 network_create.go:284] running [docker network inspect newest-cni-987495] to gather additional debugging logs...
	I1213 09:56:39.716283  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495
	W1213 09:56:39.730822  271045 cli_runner.go:211] docker network inspect newest-cni-987495 returned with exit code 1
	I1213 09:56:39.730850  271045 network_create.go:287] error running [docker network inspect newest-cni-987495]: docker network inspect newest-cni-987495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-987495 not found
	I1213 09:56:39.730864  271045 network_create.go:289] output of [docker network inspect newest-cni-987495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-987495 not found
	
	** /stderr **
	I1213 09:56:39.730969  271045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:39.748226  271045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:56:39.748572  271045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:56:39.748888  271045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:56:39.749141  271045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 09:56:39.749577  271045 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7880}
	I1213 09:56:39.749602  271045 network_create.go:124] attempt to create docker network newest-cni-987495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:56:39.749657  271045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-987495 newest-cni-987495
	I1213 09:56:39.818534  271045 network_create.go:108] docker network newest-cni-987495 192.168.85.0/24 created
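
The network.go lines above show the subnet picker at work: each candidate 192.168.x.0/24 block (stepping 49 → 58 → 67 → 76 → 85) is rejected when an existing bridge already owns it, and the first free block becomes the cluster network. A minimal Go sketch of that idea, assuming the docker CLI is on PATH; the helper and the network name "demo-net" are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // takenSubnets collects the IPv4 subnets of all existing docker networks.
    func takenSubnets() (map[string]bool, error) {
    	out, err := exec.Command("docker", "network", "ls", "-q").Output()
    	if err != nil {
    		return nil, err
    	}
    	taken := map[string]bool{}
    	for _, id := range strings.Fields(string(out)) {
    		cidr, err := exec.Command("docker", "network", "inspect", id,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
    		if err != nil {
    			continue // network may have vanished; skip it
    		}
    		taken[strings.TrimSpace(string(cidr))] = true
    	}
    	return taken, nil
    }

    func main() {
    	taken, err := takenSubnets()
    	if err != nil {
    		panic(err)
    	}
    	// Probe the same 192.168.x.0/24 ladder the log shows (49, 58, 67, 76, 85, ...).
    	for third := 49; third <= 247; third += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		gateway := fmt.Sprintf("192.168.%d.1", third)
    		err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
    			"-o", "com.docker.network.driver.mtu=1500", "demo-net").Run()
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("created demo-net on", subnet)
    		return
    	}
    	fmt.Println("no free subnet found")
    }
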
	I1213 09:56:39.818580  271045 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-987495" container
	I1213 09:56:39.818658  271045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:56:39.841206  271045 cli_runner.go:164] Run: docker volume create newest-cni-987495 --label name.minikube.sigs.k8s.io=newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:56:39.859131  271045 oci.go:103] Successfully created a docker volume newest-cni-987495
	I1213 09:56:39.859232  271045 cli_runner.go:164] Run: docker run --rm --name newest-cni-987495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --entrypoint /usr/bin/test -v newest-cni-987495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:56:40.390762  271045 oci.go:107] Successfully prepared a docker volume newest-cni-987495
	I1213 09:56:40.390831  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:40.390845  271045 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:56:40.390916  271045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:56:44.612485  271045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.221527325s)
	I1213 09:56:44.612518  271045 kic.go:203] duration metric: took 4.221669898s to extract preloaded images to volume ...
	W1213 09:56:44.612667  271045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:56:44.612789  271045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:56:44.665912  271045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-987495 --name newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-987495 --network newest-cni-987495 --ip 192.168.85.2 --volume newest-cni-987495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:56:44.956868  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Running}}
	I1213 09:56:44.977125  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:44.997663  271045 cli_runner.go:164] Run: docker exec newest-cni-987495 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:56:45.071335  271045 oci.go:144] the created container "newest-cni-987495" has a running status.
	I1213 09:56:45.071378  271045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa...
	I1213 09:56:45.174388  271045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:56:45.225815  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.265949  271045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:56:45.265980  271045 kic_runner.go:114] Args: [docker exec --privileged newest-cni-987495 chown docker:docker /home/docker/.ssh/authorized_keys]
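
The kic.go:225 / kic_runner steps above create an RSA keypair on the host, push the public half into the container as /home/docker/.ssh/authorized_keys, and chown it to the docker user. A rough Go equivalent of the key-generation half, using golang.org/x/crypto/ssh for the authorized_keys encoding (a sketch, not minikube's implementation):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate the keypair that will back ssh access to the node container.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	// Private key, PEM-encoded, written as id_rsa.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
    		panic(err)
    	}

    	// Public key in authorized_keys format, written as id_rsa.pub.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
    		panic(err)
    	}
    	// The log then copies id_rsa.pub into the container and runs
    	// `docker exec --privileged <name> chown docker:docker .../authorized_keys`.
    }
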
	I1213 09:56:45.330610  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.353288  271045 machine.go:94] provisionDockerMachine start ...
	I1213 09:56:45.353380  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:45.380805  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:45.381141  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:45.381150  271045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:56:45.381824  271045 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 09:56:48.535017  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
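
Note the first dial at 09:56:45 fails with `ssh: handshake failed: EOF` and the command only succeeds about three seconds later, once sshd inside the freshly started container is ready: the client simply retries. The sketch below shows that retry shape with a plain TCP dial (the real client retries the full SSH handshake; the address and timings are taken from the log but otherwise illustrative):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "127.0.0.1:33093" // forwarded container port 22, per the log
    	deadline := time.Now().Add(30 * time.Second)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("sshd is reachable at", addr)
    			return
    		}
    		if time.Now().After(deadline) {
    			panic(fmt.Sprintf("gave up waiting for %s: %v", addr, err))
    		}
    		time.Sleep(time.Second) // container is still booting; try again
    	}
    }
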
	
	I1213 09:56:48.535041  271045 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 09:56:48.535116  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.552976  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.553289  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.553308  271045 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 09:56:48.715838  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.716003  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.735300  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.735636  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.735659  271045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:56:48.887956  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:56:48.887983  271045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:56:48.888015  271045 ubuntu.go:190] setting up certificates
	I1213 09:56:48.888025  271045 provision.go:84] configureAuth start
	I1213 09:56:48.888083  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:48.904758  271045 provision.go:143] copyHostCerts
	I1213 09:56:48.904824  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:56:48.904839  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:56:48.904928  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:56:48.905026  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:56:48.905037  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:56:48.905066  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:56:48.905132  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:56:48.905142  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:56:48.905168  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:56:48.905218  271045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 09:56:49.148109  271045 provision.go:177] copyRemoteCerts
	I1213 09:56:49.148175  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:56:49.148216  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.167297  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.275554  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:56:49.293524  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:56:49.311255  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:56:49.328583  271045 provision.go:87] duration metric: took 440.545309ms to configureAuth
	I1213 09:56:49.328607  271045 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:56:49.328807  271045 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:49.328820  271045 machine.go:97] duration metric: took 3.97550235s to provisionDockerMachine
	I1213 09:56:49.328826  271045 client.go:176] duration metric: took 9.628307523s to LocalClient.Create
	I1213 09:56:49.328840  271045 start.go:167] duration metric: took 9.628363097s to libmachine.API.Create "newest-cni-987495"
	I1213 09:56:49.328847  271045 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 09:56:49.328857  271045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:56:49.328908  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:56:49.328944  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.345687  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.452617  271045 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:56:49.456102  271045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:56:49.456132  271045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:56:49.456144  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:56:49.456197  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:56:49.456275  271045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:56:49.456381  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:56:49.464374  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:49.482803  271045 start.go:296] duration metric: took 153.942655ms for postStartSetup
	I1213 09:56:49.483179  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.501288  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:49.501569  271045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:56:49.501608  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.519643  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.620541  271045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:56:49.625365  271045 start.go:128] duration metric: took 9.928403278s to createHost
	I1213 09:56:49.625389  271045 start.go:83] releasing machines lock for "newest-cni-987495", held for 9.928529598s
	I1213 09:56:49.625471  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.645994  271045 ssh_runner.go:195] Run: cat /version.json
	I1213 09:56:49.646048  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.646301  271045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:56:49.646369  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.671756  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.687696  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.781881  271045 ssh_runner.go:195] Run: systemctl --version
	I1213 09:56:49.881841  271045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:56:49.886330  271045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:56:49.886436  271045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:56:49.913764  271045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 09:56:49.913793  271045 start.go:496] detecting cgroup driver to use...
	I1213 09:56:49.913826  271045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:56:49.913873  271045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:56:49.928737  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:56:49.941512  271045 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:56:49.941581  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:56:49.958476  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:56:49.976657  271045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:56:50.092571  271045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:56:50.215484  271045 docker.go:234] disabling docker service ...
	I1213 09:56:50.215599  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:56:50.236595  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:56:50.249894  271045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:56:50.372863  271045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:56:50.492030  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:56:50.505104  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:56:50.520463  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:56:50.530400  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:56:50.539863  271045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:56:50.539979  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:56:50.549222  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.558350  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:56:50.567652  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.576927  271045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:56:50.585862  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:56:50.595196  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:56:50.604766  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:56:50.613925  271045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:56:50.621385  271045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:56:50.629064  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:50.735877  271045 ssh_runner.go:195] Run: sudo systemctl restart containerd
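
The run of sed edits above rewrites /etc/containerd/config.toml before this restart: it pins the sandbox (pause) image, relaxes restrict_oom_score_adj, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runtime to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and enables unprivileged ports. One of those edits expressed in Go instead of sed, as a sketch over the same file and key:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("configured containerd for the cgroupfs driver; restart containerd to apply")
    }
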
	I1213 09:56:50.857747  271045 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:56:50.857827  271045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:56:50.861671  271045 start.go:564] Will wait 60s for crictl version
	I1213 09:56:50.861742  271045 ssh_runner.go:195] Run: which crictl
	I1213 09:56:50.865238  271045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:56:50.887066  271045 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:56:50.887150  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.905856  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.933984  271045 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:56:50.936956  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:50.952566  271045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:56:50.956629  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:50.969533  271045 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:56:50.972467  271045 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:56:50.972618  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:50.972704  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:50.996202  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:50.996226  271045 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:56:50.996284  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:51.022962  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:51.022986  271045 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:56:51.022994  271045 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:56:51.023092  271045 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:56:51.023168  271045 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:56:51.048658  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:51.048683  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:51.048705  271045 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:56:51.048728  271045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:56:51.048850  271045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
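
minikube renders the kubeadm/kubelet/kube-proxy config above from Go templates filled with the values captured in the kubeadm.go:190 options (pod CIDR, cluster name, versions, cgroup driver). A toy version of that rendering step, with a hypothetical trimmed-down skeleton rather than minikube's real template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A hypothetical, trimmed-down skeleton of the ClusterConfiguration above.
    const skeleton = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: 10.96.0.0/12
    `

    func main() {
    	tmpl := template.Must(template.New("kubeadm").Parse(skeleton))
    	// Values matching the cluster config in this log.
    	err := tmpl.Execute(os.Stdout, map[string]string{
    		"ClusterName":       "mk",
    		"Port":              "8443",
    		"KubernetesVersion": "v1.35.0-beta.0",
    		"PodSubnet":         "10.42.0.0/16",
    	})
    	if err != nil {
    		panic(err)
    	}
    }
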
	
	I1213 09:56:51.048925  271045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:56:51.056725  271045 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:56:51.056795  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:56:51.064442  271045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:56:51.077624  271045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:56:51.090906  271045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 09:56:51.103635  271045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:56:51.107116  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:51.116647  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:51.221976  271045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:56:51.239889  271045 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 09:56:51.239918  271045 certs.go:195] generating shared ca certs ...
	I1213 09:56:51.239935  271045 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.240136  271045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:56:51.240196  271045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:56:51.240208  271045 certs.go:257] generating profile certs ...
	I1213 09:56:51.240266  271045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 09:56:51.240284  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt with IP's: []
	I1213 09:56:51.511583  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt ...
	I1213 09:56:51.511617  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt: {Name:mk5464ab31f64983cb0e8dc71ff54579969d5d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511818  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key ...
	I1213 09:56:51.511831  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key: {Name:mke550d3f89d3ec2570e79fb5b504a6e90138b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511927  271045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 09:56:51.511944  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:56:51.643285  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e ...
	I1213 09:56:51.643317  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e: {Name:mk6d3f18d3edc92465fdf76beebc6a34d454297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644306  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e ...
	I1213 09:56:51.644326  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e: {Name:mk3fa19df9059a7cd289477f6e36bd1b8a8de61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644427  271045 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt
	I1213 09:56:51.644510  271045 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key
	I1213 09:56:51.644572  271045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 09:56:51.644592  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt with IP's: []
	I1213 09:56:51.762782  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt ...
	I1213 09:56:51.762818  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt: {Name:mkc4655600dc8f487ec74e9635d5a6c0aaea04b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.763666  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key ...
	I1213 09:56:51.763686  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key: {Name:mkfc1bfb8023d67db678ef417275fa70be4e1a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.764520  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:56:51.764583  271045 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:56:51.764597  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:56:51.764630  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:56:51.764665  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:56:51.764701  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:56:51.764754  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:51.765415  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:56:51.785100  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:56:51.803674  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:56:51.820883  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:56:51.838678  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:56:51.855995  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:56:51.873808  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:56:51.891156  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:56:51.908530  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:56:51.925774  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:56:51.943306  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:56:51.959997  271045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
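The scp sequence above is minikube provisioning TLS material into the node: the CA pair and the profile's apiserver/proxy-client certs land in /var/lib/minikube/certs, where kubeadm is later pointed at them, while the copies under /usr/share/ca-certificates feed the node's system trust store. A minimal check that the files arrived, assuming the profile name from this log:

	minikube -p newest-cni-987495 ssh -- sudo ls -l /var/lib/minikube/certs /usr/share/ca-certificates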
	I1213 09:56:51.972782  271045 ssh_runner.go:195] Run: openssl version
	I1213 09:56:51.978921  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.986461  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:56:51.993616  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997401  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997462  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:56:52.049678  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:56:52.070937  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:56:52.084538  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.094354  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:56:52.106583  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110602  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110668  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.153509  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.160769  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.168129  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.175195  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:56:52.182476  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186073  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186133  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.226828  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:56:52.234290  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
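The hash-and-symlink steps above follow the standard OpenSSL trust-store convention: openssl x509 -hash prints the certificate's subject-name hash, and OpenSSL resolves CAs through /etc/ssl/certs/<hash>.0 symlinks. The same dance can be replayed by hand, with the hash value taken from this log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0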
	I1213 09:56:52.241627  271045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:56:52.245109  271045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:56:52.245166  271045 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
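For reference, the StartCluster config echoed above maps onto minikube start flags; a hedged reconstruction (not the test's literal command line) of the interesting ones:

	out/minikube-linux-arm64 start -p newest-cni-987495 --driver=docker \
		--container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
		--memory=3072 --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16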
	I1213 09:56:52.245251  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:56:52.245315  271045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:56:52.270271  271045 cri.go:89] found id: ""
	I1213 09:56:52.270344  271045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:56:52.278009  271045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:56:52.285767  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:52.285833  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:52.293380  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:52.293416  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:52.293469  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:52.301136  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:52.301228  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:52.308290  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:52.315693  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:52.315758  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:52.323086  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.330869  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:52.330965  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.338261  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:52.345809  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:52.345871  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
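The grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443. Here every grep exits 2 because the files do not exist yet, so the rm -f calls are no-ops. Roughly equivalent shell, for illustration:

	for f in admin kubelet controller-manager scheduler; do
		sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
			|| sudo rm -f /etc/kubernetes/$f.conf   # absent or wrong endpoint: remove
	done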
	I1213 09:56:52.353258  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:52.470124  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:52.470684  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:52.537914  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
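The long --ignore-preflight-errors list is how minikube runs kubeadm inside a Docker container: checks that cannot pass there (Swap, NumCPU, Mem, SystemVerification, the bridge-nf sysctl, already-present manifest files) are skipped rather than satisfied. The preflight phase can be replayed on its own, assuming the same generated config:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
		--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification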
	I1213 10:00:18.646921  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000078394s
	I1213 10:00:18.646949  254588 kubeadm.go:319] 
	I1213 10:00:18.647006  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:18.647040  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:18.647145  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:18.647149  254588 kubeadm.go:319] 
	I1213 10:00:18.647253  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:18.647285  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:18.647316  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:18.647320  254588 kubeadm.go:319] 
	I1213 10:00:18.652540  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:00:18.653297  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:00:18.653496  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.653975  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:18.653988  254588 kubeadm.go:319] 
	I1213 10:00:18.654109  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
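kubeadm's kubelet-check polls http://127.0.0.1:10248/healthz for up to 4m0s; "connection refused" means the kubelet never bound its health port at all, i.e. it was crash-looping before it could serve. The triage the log itself suggests, plus the probe:

	curl -sS http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50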
	I1213 10:00:18.654176  254588 kubeadm.go:403] duration metric: took 8m6.435468168s to StartCluster
	I1213 10:00:18.654233  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:00:18.654307  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:00:18.680404  254588 cri.go:89] found id: ""
	I1213 10:00:18.680438  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.680448  254588 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:00:18.680454  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:00:18.680527  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:00:18.704694  254588 cri.go:89] found id: ""
	I1213 10:00:18.704765  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.704788  254588 logs.go:284] No container was found matching "etcd"
	I1213 10:00:18.704803  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:00:18.704886  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:00:18.732906  254588 cri.go:89] found id: ""
	I1213 10:00:18.732932  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.732942  254588 logs.go:284] No container was found matching "coredns"
	I1213 10:00:18.732949  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:00:18.733006  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:00:18.758531  254588 cri.go:89] found id: ""
	I1213 10:00:18.758558  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.758567  254588 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:00:18.758574  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:00:18.758643  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:00:18.787111  254588 cri.go:89] found id: ""
	I1213 10:00:18.787138  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.787147  254588 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:00:18.787153  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:00:18.787211  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:00:18.814001  254588 cri.go:89] found id: ""
	I1213 10:00:18.814025  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.814034  254588 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:00:18.814041  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:00:18.814115  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:00:18.842019  254588 cri.go:89] found id: ""
	I1213 10:00:18.842046  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.842059  254588 logs.go:284] No container was found matching "kindnet"
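The crictl sweep above queries every expected control-plane container by name filter; all seven come back with empty ID lists, confirming that nothing ever started under containerd. Condensed, the sweep is:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		sudo crictl ps -a --quiet --name=$c
	done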
	I1213 10:00:18.842096  254588 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:00:18.842115  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:00:18.905936  254588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:00:18.905963  254588 logs.go:123] Gathering logs for containerd ...
	I1213 10:00:18.905977  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:00:18.949644  254588 logs.go:123] Gathering logs for container status ...
	I1213 10:00:18.949677  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:00:18.977252  254588 logs.go:123] Gathering logs for kubelet ...
	I1213 10:00:18.977281  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:00:19.035838  254588 logs.go:123] Gathering logs for dmesg ...
	I1213 10:00:19.035876  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
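These journalctl/dmesg one-liners produce the "==> containerd <==", "==> kubelet <==" and "==> dmesg <==" sections further down. The same bundle can be captured with minikube's own log command (profile name inferred from the surrounding output):

	out/minikube-linux-arm64 -p no-preload-328069 logs --file=logs.txt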
	W1213 10:00:19.049572  254588 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:00:19.049623  254588 out.go:285] * 
	W1213 10:00:19.049685  254588 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above]
	
	W1213 10:00:19.049702  254588 out.go:285] * 
	W1213 10:00:19.051871  254588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:00:19.057079  254588 out.go:203] 
	W1213 10:00:19.061004  254588 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output above]
	
	W1213 10:00:19.061054  254588 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:00:19.061074  254588 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
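Taken literally, the suggestion above amounts to a retry with (profile name inferred from the surrounding post-mortem):

	out/minikube-linux-arm64 start -p no-preload-328069 --extra-config=kubelet.cgroup-driver=systemd

Given the kubelet journal below, though, the blocker on this host is cgroup v1 itself rather than the cgroup driver, so this flag alone is unlikely to help.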
	I1213 10:00:19.064330  254588 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:52:02 no-preload-328069 containerd[755]: time="2025-12-13T09:52:02.879804179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.990607336Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.992952819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.009273066Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.010406673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.074822736Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.077081596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.085416708Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.087033692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.147342869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.149762354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157038592Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157794989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.618593571Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.620865199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.629284660Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.630354201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.744735165Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.746972085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.756996214Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.757622616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.140072906Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.142312452Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.150787462Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.151785092Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:21.642895    5674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:21.643624    5674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:21.645591    5674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:21.646188    5674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:21.647902    5674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:00:21 up  1:42,  0 user,  load average: 0.86, 1.21, 1.80
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:00:18 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:19 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:19 no-preload-328069 kubelet[5434]: E1213 10:00:19.381468    5434 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:19 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 kubelet[5530]: E1213 10:00:20.106865    5530 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 kubelet[5567]: E1213 10:00:20.855251    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:21 no-preload-328069 kubelet[5665]: E1213 10:00:21.616547    5665 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
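The kubelet journal above pinpoints the failure: kubelet v1.35 refuses to start on a cgroup v1 host unless explicitly opted back in, matching the earlier preflight warning about 'FailCgroupV1'. Two checks, for illustration:

	stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" on a v2 host; "tmpfs" indicates cgroup v1
	# KubeletConfiguration fragment, per the warning above, to re-enable v1 if desired:
	#   failCgroupV1: false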
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 6 (312.317432ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:00:22.064362  276642 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
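status --format takes a Go template over minikube's Status struct, so {{.APIServer}} prints just that field; the exit status 6 here accompanies the kubeconfig endpoint error shown in stderr. A broader template, for illustration:

	out/minikube-linux-arm64 status -p no-preload-328069 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}'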
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:51:52.8299513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c2a9ce40eddef38103a6cf9a5059be6d55a21e5d26f2dcd09256f4d6e4e169b",
	            "SandboxKey": "/var/run/docker/netns/0c2a9ce40edd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:15:2e:f9:55:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "14441a2b315a1f21a464e01d546592920a40d2eff4ecca4a3389aa3acc59dd14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
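The full inspect dump above is what the post-mortem archives; when triaging by hand, the same Go templates minikube itself uses can pull out just the interesting fields. A minimal query against this container (name and expected values taken from the dump above):

	# state, node IP, and published SSH port in one line
	docker inspect no-preload-328069 --format 'state={{.State.Status}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# expected here: state=running ip=192.168.76.2 ssh=33073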
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 6 (335.261968ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:00:22.417564  276718 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
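Exit status 6 comes from the kubeconfig check: the profile's endpoint is missing from the kubeconfig the test binary points at, which is exactly what the stderr above reports. The stdout warning already names the remedy; against this profile it would be:

	# rewrite the kubeconfig entry for the profile, then confirm the context exists
	out/minikube-linux-arm64 -p no-preload-328069 update-context
	kubectl config get-contexts no-preload-328069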
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-640993                                                                                                                                                                                                                                  │ old-k8s-version-640993       │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:52 UTC │
	│ delete  │ -p kubernetes-upgrade-355809                                                                                                                                                                                                                               │ kubernetes-upgrade-355809    │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p disable-driver-mounts-130854                                                                                                                                                                                                                            │ disable-driver-mounts-130854 │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
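The Audit table is rendered by "minikube logs" from minikube's audit log; the raw entries typically live under the minikube home used by the run (the exact file location is an assumption here, not shown in the log):

	tail -n 5 /home/jenkins/minikube-integration/22128-2315/.minikube/logs/audit.json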
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:56:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:56:39.477521  271045 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:56:39.477696  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.477728  271045 out.go:374] Setting ErrFile to fd 2...
	I1213 09:56:39.477749  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.478026  271045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:56:39.478473  271045 out.go:368] Setting JSON to false
	I1213 09:56:39.479400  271045 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5952,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:56:39.479497  271045 start.go:143] virtualization:  
	I1213 09:56:39.483651  271045 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:56:39.488083  271045 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:56:39.488164  271045 notify.go:221] Checking for updates...
	I1213 09:56:39.494770  271045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:56:39.497855  271045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:56:39.500958  271045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:56:39.504012  271045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:56:39.507152  271045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:56:39.510591  271045 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:39.510687  271045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:56:39.534137  271045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:56:39.534252  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.597640  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.588587407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.597744  271045 docker.go:319] overlay module found
	I1213 09:56:39.602972  271045 out.go:179] * Using the docker driver based on user configuration
	I1213 09:56:39.605905  271045 start.go:309] selected driver: docker
	I1213 09:56:39.605926  271045 start.go:927] validating driver "docker" against <nil>
	I1213 09:56:39.605939  271045 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:56:39.606668  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.659228  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.649874797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.659395  271045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 09:56:39.659424  271045 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 09:56:39.659705  271045 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:56:39.662610  271045 out.go:179] * Using Docker driver with root privileges
	I1213 09:56:39.665424  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:39.665484  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:39.665497  271045 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:56:39.665588  271045 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
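The cluster config echoed above is also persisted per profile (see the "Saving config" line earlier), so diffing it across runs is easiest from the JSON on disk; assuming jq is installed:

	jq . /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json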
	I1213 09:56:39.668716  271045 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 09:56:39.671669  271045 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:56:39.674572  271045 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:56:39.677446  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:39.677492  271045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:56:39.677522  271045 cache.go:65] Caching tarball of preloaded images
	I1213 09:56:39.677617  271045 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:56:39.677632  271045 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:56:39.677739  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:39.677763  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json: {Name:mkb4456221b0cea9f33fc0d473e380a268794011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:39.677865  271045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:56:39.696673  271045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:56:39.696697  271045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:56:39.696712  271045 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:56:39.696745  271045 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:56:39.696846  271045 start.go:364] duration metric: took 80.821µs to acquireMachinesLock for "newest-cni-987495"
	I1213 09:56:39.696875  271045 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:56:39.696947  271045 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:56:39.700273  271045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:56:39.700478  271045 start.go:159] libmachine.API.Create for "newest-cni-987495" (driver="docker")
	I1213 09:56:39.700510  271045 client.go:173] LocalClient.Create starting
	I1213 09:56:39.700595  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:56:39.700636  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700653  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.700719  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:56:39.700738  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700753  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.701087  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:56:39.716190  271045 cli_runner.go:211] docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:56:39.716263  271045 network_create.go:284] running [docker network inspect newest-cni-987495] to gather additional debugging logs...
	I1213 09:56:39.716283  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495
	W1213 09:56:39.730822  271045 cli_runner.go:211] docker network inspect newest-cni-987495 returned with exit code 1
	I1213 09:56:39.730850  271045 network_create.go:287] error running [docker network inspect newest-cni-987495]: docker network inspect newest-cni-987495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-987495 not found
	I1213 09:56:39.730864  271045 network_create.go:289] output of [docker network inspect newest-cni-987495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-987495 not found
	
	** /stderr **
	I1213 09:56:39.730969  271045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:39.748226  271045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:56:39.748572  271045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:56:39.748888  271045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:56:39.749141  271045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 09:56:39.749577  271045 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7880}
	I1213 09:56:39.749602  271045 network_create.go:124] attempt to create docker network newest-cni-987495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:56:39.749657  271045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-987495 newest-cni-987495
	I1213 09:56:39.818534  271045 network_create.go:108] docker network newest-cni-987495 192.168.85.0/24 created
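The scan above walks candidate private /24 subnets in steps of 9 (49, 58, 67, 76, 85, ...) and takes the first one no existing bridge claims. A rough shell equivalent of just the Docker-side check (minikube's real logic also inspects host interfaces and holds a reservation):

	# first 192.168.x.0/24 not already used by a Docker network
	for third in 49 58 67 76 85 94; do
	  subnet="192.168.${third}.0/24"
	  docker network ls -q | xargs -r docker network inspect \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep -qx "$subnet" \
	    || { echo "free: $subnet"; break; }
	done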
	I1213 09:56:39.818580  271045 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-987495" container
	I1213 09:56:39.818658  271045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:56:39.841206  271045 cli_runner.go:164] Run: docker volume create newest-cni-987495 --label name.minikube.sigs.k8s.io=newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:56:39.859131  271045 oci.go:103] Successfully created a docker volume newest-cni-987495
	I1213 09:56:39.859232  271045 cli_runner.go:164] Run: docker run --rm --name newest-cni-987495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --entrypoint /usr/bin/test -v newest-cni-987495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:56:40.390762  271045 oci.go:107] Successfully prepared a docker volume newest-cni-987495
	I1213 09:56:40.390831  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:40.390845  271045 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:56:40.390916  271045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:56:44.612485  271045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.221527325s)
	I1213 09:56:44.612518  271045 kic.go:203] duration metric: took 4.221669898s to extract preloaded images to volume ...
	W1213 09:56:44.612667  271045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:56:44.612789  271045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:56:44.665912  271045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-987495 --name newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-987495 --network newest-cni-987495 --ip 192.168.85.2 --volume newest-cni-987495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:56:44.956868  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Running}}
	I1213 09:56:44.977125  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:44.997663  271045 cli_runner.go:164] Run: docker exec newest-cni-987495 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:56:45.071335  271045 oci.go:144] the created container "newest-cni-987495" has a running status.
	I1213 09:56:45.071378  271045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa...
	I1213 09:56:45.174388  271045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:56:45.225815  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.265949  271045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:56:45.265980  271045 kic_runner.go:114] Args: [docker exec --privileged newest-cni-987495 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:56:45.330610  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.353288  271045 machine.go:94] provisionDockerMachine start ...
	I1213 09:56:45.353380  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:45.380805  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:45.381141  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:45.381150  271045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:56:45.381824  271045 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
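The failed dial above is expected: provisioning races the container's sshd, and the client retries until the handshake succeeds (about three seconds later in this run). The port being dialed, 33093, is Docker's published mapping for 22/tcp; the same lookup pointed at the no-preload container from the inspect dump earlier returns 33073:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-328069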
	I1213 09:56:48.535017  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.535041  271045 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 09:56:48.535116  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.552976  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.553289  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.553308  271045 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 09:56:48.715838  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.716003  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.735300  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.735636  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.735659  271045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
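The inline script above pins 127.0.1.1 to the new hostname inside the node; it only echoes when it has to append a fresh entry, so the empty command output that follows is normal. To double-check the result from the host:

	docker exec newest-cni-987495 grep newest-cni-987495 /etc/hosts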
	I1213 09:56:48.887956  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:56:48.887983  271045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:56:48.888015  271045 ubuntu.go:190] setting up certificates
	I1213 09:56:48.888025  271045 provision.go:84] configureAuth start
	I1213 09:56:48.888083  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:48.904758  271045 provision.go:143] copyHostCerts
	I1213 09:56:48.904824  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:56:48.904839  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:56:48.904928  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:56:48.905026  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:56:48.905037  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:56:48.905066  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:56:48.905132  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:56:48.905142  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:56:48.905168  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:56:48.905218  271045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 09:56:49.148109  271045 provision.go:177] copyRemoteCerts
	I1213 09:56:49.148175  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:56:49.148216  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.167297  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.275554  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:56:49.293524  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:56:49.311255  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
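configureAuth generates a server certificate whose SANs must cover every address clients use to reach the node's TLS endpoints, hence the list [127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495] above. To audit the SANs on the copy kept under the minikube home:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'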
	I1213 09:56:49.328583  271045 provision.go:87] duration metric: took 440.545309ms to configureAuth
	I1213 09:56:49.328607  271045 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:56:49.328807  271045 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:49.328820  271045 machine.go:97] duration metric: took 3.97550235s to provisionDockerMachine
	I1213 09:56:49.328826  271045 client.go:176] duration metric: took 9.628307523s to LocalClient.Create
	I1213 09:56:49.328840  271045 start.go:167] duration metric: took 9.628363097s to libmachine.API.Create "newest-cni-987495"
	I1213 09:56:49.328847  271045 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 09:56:49.328857  271045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:56:49.328908  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:56:49.328944  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.345687  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.452617  271045 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:56:49.456102  271045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:56:49.456132  271045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:56:49.456144  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:56:49.456197  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:56:49.456275  271045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:56:49.456381  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:56:49.464374  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:49.482803  271045 start.go:296] duration metric: took 153.942655ms for postStartSetup
	I1213 09:56:49.483179  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.501288  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:49.501569  271045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:56:49.501608  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.519643  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.620541  271045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:56:49.625365  271045 start.go:128] duration metric: took 9.928403278s to createHost
	I1213 09:56:49.625389  271045 start.go:83] releasing machines lock for "newest-cni-987495", held for 9.928529598s
	I1213 09:56:49.625471  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.645994  271045 ssh_runner.go:195] Run: cat /version.json
	I1213 09:56:49.646048  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.646301  271045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:56:49.646369  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.671756  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.687696  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.781881  271045 ssh_runner.go:195] Run: systemctl --version
	I1213 09:56:49.881841  271045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:56:49.886330  271045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:56:49.886436  271045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:56:49.913764  271045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
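The find/mv above parks any bridge or podman CNI configs by appending .mk_disabled, so they cannot conflict with the CNI minikube installs later (kindnet, per cni.go:143 further down). Assuming the same node paths as this log, the parked configs can be listed with:

  $ ls /etc/cni/net.d/*.mk_disabled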
	I1213 09:56:49.913793  271045 start.go:496] detecting cgroup driver to use...
	I1213 09:56:49.913826  271045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:56:49.913873  271045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:56:49.928737  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:56:49.941512  271045 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:56:49.941581  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:56:49.958476  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:56:49.976657  271045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:56:50.092571  271045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:56:50.215484  271045 docker.go:234] disabling docker service ...
	I1213 09:56:50.215599  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:56:50.236595  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:56:50.249894  271045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:56:50.372863  271045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:56:50.492030  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
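The cri-docker and docker units are stopped, disabled, and masked in turn; masking links the unit to /dev/null so nothing can restart it underneath containerd. The same sequence the log runs step by step, condensed into a sketch:

  $ sudo systemctl stop -f docker.socket docker.service
  $ sudo systemctl disable docker.socket
  $ sudo systemctl mask docker.service   # unit now points at /dev/null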
	I1213 09:56:50.505104  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:56:50.520463  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:56:50.530400  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:56:50.539863  271045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:56:50.539979  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:56:50.549222  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.558350  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:56:50.567652  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.576927  271045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:56:50.585862  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:56:50.595196  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:56:50.604766  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 09:56:50.613925  271045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:56:50.621385  271045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:56:50.629064  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:50.735877  271045 ssh_runner.go:195] Run: sudo systemctl restart containerd
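The run of sed edits above rewrites /etc/containerd/config.toml in place (pause image 3.10.1, restrict_oom_score_adj=false, SystemdCgroup=false to match the detected cgroupfs driver, the runc v2 runtime, and the CNI conf_dir), and the daemon-reload plus restart applies them. A quick post-restart check on the node, using the same file path:

  $ grep -n 'SystemdCgroup' /etc/containerd/config.toml   # should now read: SystemdCgroup = false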
	I1213 09:56:50.857747  271045 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:56:50.857827  271045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:56:50.861671  271045 start.go:564] Will wait 60s for crictl version
	I1213 09:56:50.861742  271045 ssh_runner.go:195] Run: which crictl
	I1213 09:56:50.865238  271045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:56:50.887066  271045 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:56:50.887150  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.905856  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.933984  271045 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:56:50.936956  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:50.952566  271045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:56:50.956629  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
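The /etc/hosts update uses a replace-then-append idiom that stays idempotent across restarts: strip any existing host.minikube.internal entry, append the current gateway IP, and install the result with a single sudo cp. The same shell pattern, spelled out:

  $ { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts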
	I1213 09:56:50.969533  271045 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:56:50.972467  271045 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:56:50.972618  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:50.972704  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:50.996202  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:50.996226  271045 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:56:50.996284  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:51.022962  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:51.022986  271045 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:56:51.022994  271045 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:56:51.023092  271045 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
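In the kubelet unit rendered above, the bare ExecStart= line is intentional: the text is written as a systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp a few lines below), and an empty ExecStart= clears the command inherited from the base unit before the new one is set. After the daemon-reload, the merged unit can be inspected with:

  $ systemctl cat kubelet   # base kubelet.service plus the 10-kubeadm.conf drop-in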
	I1213 09:56:51.023168  271045 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:56:51.048658  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:51.048683  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:51.048705  271045 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:56:51.048728  271045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:56:51.048850  271045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:56:51.048925  271045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:56:51.056725  271045 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:56:51.056795  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:56:51.064442  271045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:56:51.077624  271045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:56:51.090906  271045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
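The rendered kubeadm config bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new above. As a sketch, recent kubeadm releases can sanity-check such a file before init, using the binary path from this log:

  $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml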
	I1213 09:56:51.103635  271045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:56:51.107116  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:51.116647  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:51.221976  271045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:56:51.239889  271045 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 09:56:51.239918  271045 certs.go:195] generating shared ca certs ...
	I1213 09:56:51.239935  271045 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.240136  271045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:56:51.240196  271045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:56:51.240208  271045 certs.go:257] generating profile certs ...
	I1213 09:56:51.240266  271045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 09:56:51.240284  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt with IP's: []
	I1213 09:56:51.511583  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt ...
	I1213 09:56:51.511617  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt: {Name:mk5464ab31f64983cb0e8dc71ff54579969d5d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511818  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key ...
	I1213 09:56:51.511831  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key: {Name:mke550d3f89d3ec2570e79fb5b504a6e90138b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511927  271045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 09:56:51.511944  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:56:51.643285  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e ...
	I1213 09:56:51.643317  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e: {Name:mk6d3f18d3edc92465fdf76beebc6a34d454297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644306  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e ...
	I1213 09:56:51.644326  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e: {Name:mk3fa19df9059a7cd289477f6e36bd1b8a8de61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644427  271045 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt
	I1213 09:56:51.644510  271045 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key
	I1213 09:56:51.644572  271045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 09:56:51.644592  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt with IP's: []
	I1213 09:56:51.762782  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt ...
	I1213 09:56:51.762818  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt: {Name:mkc4655600dc8f487ec74e9635d5a6c0aaea04b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.763666  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key ...
	I1213 09:56:51.763686  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key: {Name:mkfc1bfb8023d67db678ef417275fa70be4e1a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.764520  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:56:51.764583  271045 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:56:51.764597  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:56:51.764630  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:56:51.764665  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:56:51.764701  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:56:51.764754  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:51.765415  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:56:51.785100  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:56:51.803674  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:56:51.820883  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:56:51.838678  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:56:51.855995  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:56:51.873808  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:56:51.891156  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:56:51.908530  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:56:51.925774  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:56:51.943306  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:56:51.959997  271045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
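The apiserver certificate generated earlier embeds the SANs listed at crypto.go:68 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2); a mismatch between those SANs and the address clients dial is a classic source of x509 errors. To inspect what was actually signed, at the path the cert was copied to:

  $ openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'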
	I1213 09:56:51.972782  271045 ssh_runner.go:195] Run: openssl version
	I1213 09:56:51.978921  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.986461  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:56:51.993616  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997401  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997462  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:56:52.049678  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:56:52.070937  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:56:52.084538  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.094354  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:56:52.106583  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110602  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110668  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.153509  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.160769  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.168129  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.175195  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:56:52.182476  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186073  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186133  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.226828  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:56:52.234290  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
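Each CA dropped into /usr/share/ca-certificates is also linked as /etc/ssl/certs/<hash>.0, where <hash> is OpenSSL's subject-name hash; that is what the openssl x509 -hash calls above compute (b5213941 for minikubeCA.pem, hence the b5213941.0 symlink). Reproducing the hash by hand:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941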
	I1213 09:56:52.241627  271045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:56:52.245109  271045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:56:52.245166  271045 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:52.245251  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:56:52.245315  271045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:56:52.270271  271045 cri.go:89] found id: ""
	I1213 09:56:52.270344  271045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:56:52.278009  271045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:56:52.285767  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:52.285833  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:52.293380  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:52.293416  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:52.293469  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:52.301136  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:52.301228  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:52.308290  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:52.315693  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:52.315758  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:52.323086  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.330869  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:52.330965  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.338261  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:52.345809  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:52.345871  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
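The four grep/rm pairs above apply one rule per kubeconfig: a file that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm init can regenerate it. The pattern, condensed into a sketch:

  $ for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done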
	I1213 09:56:52.353258  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:52.470124  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:52.470684  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:52.537914  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.646921  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000078394s
	I1213 10:00:18.646949  254588 kubeadm.go:319] 
	I1213 10:00:18.647006  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:18.647040  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:18.647145  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:18.647149  254588 kubeadm.go:319] 
	I1213 10:00:18.647253  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:18.647285  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:18.647316  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:18.647320  254588 kubeadm.go:319] 
	I1213 10:00:18.652540  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:00:18.653297  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:00:18.653496  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.653975  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:18.653988  254588 kubeadm.go:319] 
	I1213 10:00:18.654109  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:00:18.654176  254588 kubeadm.go:403] duration metric: took 8m6.435468168s to StartCluster
	I1213 10:00:18.654233  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:00:18.654307  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:00:18.680404  254588 cri.go:89] found id: ""
	I1213 10:00:18.680438  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.680448  254588 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:00:18.680454  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:00:18.680527  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:00:18.704694  254588 cri.go:89] found id: ""
	I1213 10:00:18.704765  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.704788  254588 logs.go:284] No container was found matching "etcd"
	I1213 10:00:18.704803  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:00:18.704886  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:00:18.732906  254588 cri.go:89] found id: ""
	I1213 10:00:18.732932  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.732942  254588 logs.go:284] No container was found matching "coredns"
	I1213 10:00:18.732949  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:00:18.733006  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:00:18.758531  254588 cri.go:89] found id: ""
	I1213 10:00:18.758558  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.758567  254588 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:00:18.758574  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:00:18.758643  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:00:18.787111  254588 cri.go:89] found id: ""
	I1213 10:00:18.787138  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.787147  254588 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:00:18.787153  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:00:18.787211  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:00:18.814001  254588 cri.go:89] found id: ""
	I1213 10:00:18.814025  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.814034  254588 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:00:18.814041  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:00:18.814115  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:00:18.842019  254588 cri.go:89] found id: ""
	I1213 10:00:18.842046  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.842059  254588 logs.go:284] No container was found matching "kindnet"
	I1213 10:00:18.842096  254588 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:00:18.842115  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:00:18.905936  254588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:00:18.905963  254588 logs.go:123] Gathering logs for containerd ...
	I1213 10:00:18.905977  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:00:18.949644  254588 logs.go:123] Gathering logs for container status ...
	I1213 10:00:18.949677  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:00:18.977252  254588 logs.go:123] Gathering logs for kubelet ...
	I1213 10:00:18.977281  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:00:19.035838  254588 logs.go:123] Gathering logs for dmesg ...
	I1213 10:00:19.035876  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:00:19.049572  254588 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:00:19.049623  254588 out.go:285] * 
	W1213 10:00:19.049685  254588 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	W1213 10:00:19.049702  254588 out.go:285] * 
	W1213 10:00:19.051871  254588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:00:19.057079  254588 out.go:203] 
	W1213 10:00:19.061004  254588 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	W1213 10:00:19.061054  254588 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:00:19.061074  254588 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:00:19.064330  254588 out.go:203] 
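The journal excerpts further below pin the failure: kubelet v1.35.0-beta.0 refuses to start because this host is still on cgroup v1, so kubeadm's wait on http://127.0.0.1:10248/healthz can never succeed. Two quick checks, sketched here rather than taken from the captured run (the stat probe is a standard cgroup check run on the host or via minikube ssh; the restart uses the exact suggestion minikube prints above, which targets the common cgroup-driver mismatch and may not clear the v1 validation itself):

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, the failing case on this 5.15.0-1084-aws kernel
	stat -fc %T /sys/fs/cgroup/
	# retry with the kubelet override minikube suggests above
	out/minikube-linux-arm64 start -p no-preload-328069 --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd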
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:52:02 no-preload-328069 containerd[755]: time="2025-12-13T09:52:02.879804179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.990607336Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.992952819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.009273066Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.010406673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.074822736Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.077081596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.085416708Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.087033692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.147342869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.149762354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157038592Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157794989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.618593571Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.620865199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.629284660Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.630354201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.744735165Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.746972085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.756996214Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.757622616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.140072906Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.142312452Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.150787462Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.151785092Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
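The ImageCreate/ImageUpdate events above only prove that the no-preload pulls landed in containerd's image store. To rule out the runtime side while triaging, the store can be listed directly on the node; a sketch, assuming crictl is available inside the kicbase image (it ships with minikube's node image) rather than anything the harness actually ran:

	out/minikube-linux-arm64 ssh -p no-preload-328069 "sudo crictl images"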
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:23.061445    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:23.062290    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:23.064062    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:23.064734    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:23.066584    5803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:00:23 up  1:42,  0 user,  load average: 1.43, 1.32, 1.84
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:20 no-preload-328069 kubelet[5567]: E1213 10:00:20.855251    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:21 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:21 no-preload-328069 kubelet[5665]: E1213 10:00:21.616547    5665 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:21 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:22 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 13 10:00:22 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:22 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:22 no-preload-328069 kubelet[5710]: E1213 10:00:22.330565    5710 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:22 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:22 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:00:23 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 13 10:00:23 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:23 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:00:23 no-preload-328069 kubelet[5807]: E1213 10:00:23.113603    5807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:00:23 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:00:23 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
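The loop above is tight: restart counter 322 to 325 inside three seconds, each attempt dying on the same configuration-validation error before kubelet can serve /healthz. In a longer journal, filtering to just that error keeps the signal visible; a sketch using the profile from this run, not a command the harness issued:

	out/minikube-linux-arm64 ssh -p no-preload-328069 \
	  "sudo journalctl -u kubelet --no-pager | grep 'failed to validate kubelet configuration'"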
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 6 (320.604177ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:00:23.498251  276949 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (2.91s)
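Both symptoms above (apiserver "Stopped" and the profile missing from the kubeconfig) follow from the aborted kubeadm init, not from the deploy step itself. Two follow-ups the output itself points at, shown as a sketch rather than as part of the run:

	# full status instead of the single templated field
	out/minikube-linux-arm64 status -p no-preload-328069
	# repoint the kubectl context, per the warning in the stdout above
	out/minikube-linux-arm64 update-context -p no-preload-328069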

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (106.91s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 10:00:31.695913    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:00:52.177955    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:01:23.077507    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:01:33.139462    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:01:40.007981    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.439298088s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-328069 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-328069 describe deploy/metrics-server -n kube-system: exit status 1 (58.919925ms)

** stderr ** 
	error: context "no-preload-328069" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-328069 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
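Every kubectl apply above failed on the same dial to localhost:8443, so the addon error is downstream of the apiserver never starting, not a defect in the metrics-server manifests. A quick probe from inside the node (hypothetical here, and expected to return the same connection refused) separates the two cases before anyone reaches for --validate=false:

	out/minikube-linux-arm64 ssh -p no-preload-328069 "curl -sk https://localhost:8443/healthz"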
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 254898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:51:52.8299513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c2a9ce40eddef38103a6cf9a5059be6d55a21e5d26f2dcd09256f4d6e4e169b",
	            "SandboxKey": "/var/run/docker/netns/0c2a9ce40edd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:15:2e:f9:55:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "14441a2b315a1f21a464e01d546592920a40d2eff4ecca4a3389aa3acc59dd14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
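The inspect output confirms the Docker layer is healthy: the container is running with 8443 published on 127.0.0.1:33076, so the breakage is inside the guest. When only those fields matter, a Go template keeps the check to one line; a sketch, not part of the harness:

	docker inspect -f '{{.State.Status}} {{index .NetworkSettings.Ports "8443/tcp"}}' no-preload-328069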
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 6 (323.92203ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:02:09.343092  278835 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
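The recurring status.go:458 error says the profile's endpoint was never written into the Jenkins kubeconfig, which matches the aborted bootstrap. Listing what that file actually contains is the quickest cross-check; a sketch using the path from the error above:

	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/22128-2315/kubeconfig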
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:52 UTC │
	│ delete  │ -p kubernetes-upgrade-355809                                                                                                                                                                                                                               │ kubernetes-upgrade-355809    │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ delete  │ -p disable-driver-mounts-130854                                                                                                                                                                                                                            │ disable-driver-mounts-130854 │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │ 13 Dec 25 09:51 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 09:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:56:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:56:39.477521  271045 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:56:39.477696  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.477728  271045 out.go:374] Setting ErrFile to fd 2...
	I1213 09:56:39.477749  271045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:56:39.478026  271045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:56:39.478473  271045 out.go:368] Setting JSON to false
	I1213 09:56:39.479400  271045 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5952,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:56:39.479497  271045 start.go:143] virtualization:  
	I1213 09:56:39.483651  271045 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:56:39.488083  271045 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:56:39.488164  271045 notify.go:221] Checking for updates...
	I1213 09:56:39.494770  271045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:56:39.497855  271045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:56:39.500958  271045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:56:39.504012  271045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:56:39.507152  271045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:56:39.510591  271045 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:39.510687  271045 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:56:39.534137  271045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:56:39.534252  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.597640  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.588587407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.597744  271045 docker.go:319] overlay module found
	I1213 09:56:39.602972  271045 out.go:179] * Using the docker driver based on user configuration
	I1213 09:56:39.605905  271045 start.go:309] selected driver: docker
	I1213 09:56:39.605926  271045 start.go:927] validating driver "docker" against <nil>
	I1213 09:56:39.605939  271045 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:56:39.606668  271045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:56:39.659228  271045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:56:39.649874797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:56:39.659395  271045 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 09:56:39.659424  271045 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 09:56:39.659705  271045 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:56:39.662610  271045 out.go:179] * Using Docker driver with root privileges
	I1213 09:56:39.665424  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:39.665484  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:39.665497  271045 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:56:39.665588  271045 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:39.668716  271045 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 09:56:39.671669  271045 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 09:56:39.674572  271045 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:56:39.677446  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:39.677492  271045 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 09:56:39.677522  271045 cache.go:65] Caching tarball of preloaded images
	I1213 09:56:39.677617  271045 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 09:56:39.677632  271045 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 09:56:39.677739  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:39.677763  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json: {Name:mkb4456221b0cea9f33fc0d473e380a268794011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
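	[annotation] Profile writes like the config.json save above go through a named lock with the {Delay:500ms Timeout:1m0s} spec shown in the log line. A minimal sketch of that acquire-with-timeout pattern, using a plain lock file (a hypothetical stand-in, not minikube's actual lock implementation):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock repeatedly tries to create lockPath exclusively, retrying every
// delay until timeout elapses, mirroring the Delay:500ms Timeout:1m0s spec.
func acquireLock(lockPath string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", lockPath, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... write the profile's config.json while holding the lock ...
}
```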
	I1213 09:56:39.677865  271045 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:56:39.696673  271045 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:56:39.696697  271045 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:56:39.696712  271045 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:56:39.696745  271045 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:56:39.696846  271045 start.go:364] duration metric: took 80.821µs to acquireMachinesLock for "newest-cni-987495"
	I1213 09:56:39.696875  271045 start.go:93] Provisioning new machine with config: &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 09:56:39.696947  271045 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:56:39.700273  271045 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:56:39.700478  271045 start.go:159] libmachine.API.Create for "newest-cni-987495" (driver="docker")
	I1213 09:56:39.700510  271045 client.go:173] LocalClient.Create starting
	I1213 09:56:39.700595  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 09:56:39.700636  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700653  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.700719  271045 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 09:56:39.700738  271045 main.go:143] libmachine: Decoding PEM data...
	I1213 09:56:39.700753  271045 main.go:143] libmachine: Parsing certificate...
	I1213 09:56:39.701087  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:56:39.716190  271045 cli_runner.go:211] docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:56:39.716263  271045 network_create.go:284] running [docker network inspect newest-cni-987495] to gather additional debugging logs...
	I1213 09:56:39.716283  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495
	W1213 09:56:39.730822  271045 cli_runner.go:211] docker network inspect newest-cni-987495 returned with exit code 1
	I1213 09:56:39.730850  271045 network_create.go:287] error running [docker network inspect newest-cni-987495]: docker network inspect newest-cni-987495: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-987495 not found
	I1213 09:56:39.730864  271045 network_create.go:289] output of [docker network inspect newest-cni-987495]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-987495 not found
	
	** /stderr **
	I1213 09:56:39.730969  271045 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:39.748226  271045 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 09:56:39.748572  271045 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 09:56:39.748888  271045 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 09:56:39.749141  271045 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 09:56:39.749577  271045 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7880}
	I1213 09:56:39.749602  271045 network_create.go:124] attempt to create docker network newest-cni-987495 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:56:39.749657  271045 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-987495 newest-cni-987495
	I1213 09:56:39.818534  271045 network_create.go:108] docker network newest-cni-987495 192.168.85.0/24 created
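	[annotation] The subnet probe above walks candidate private /24 networks (192.168.49.0, .58, .67, .76, ...), skipping any already backing a Docker bridge, before settling on 192.168.85.0/24. A sketch of that stepping logic, assuming the increment of 9 implied by the subnets in the log and a caller-supplied set of taken CIDRs:

```go
package main

import "fmt"

// firstFreeSubnet steps the third octet by 9 starting at 192.168.49.0/24 and
// returns the first candidate not present in taken.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	// The four bridges the log reports as taken.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24 true
}
```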
	I1213 09:56:39.818580  271045 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-987495" container
	I1213 09:56:39.818658  271045 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:56:39.841206  271045 cli_runner.go:164] Run: docker volume create newest-cni-987495 --label name.minikube.sigs.k8s.io=newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:56:39.859131  271045 oci.go:103] Successfully created a docker volume newest-cni-987495
	I1213 09:56:39.859232  271045 cli_runner.go:164] Run: docker run --rm --name newest-cni-987495-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --entrypoint /usr/bin/test -v newest-cni-987495:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:56:40.390762  271045 oci.go:107] Successfully prepared a docker volume newest-cni-987495
	I1213 09:56:40.390831  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:40.390845  271045 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:56:40.390916  271045 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:56:44.612485  271045 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-987495:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.221527325s)
	I1213 09:56:44.612518  271045 kic.go:203] duration metric: took 4.221669898s to extract preloaded images to volume ...
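	[annotation] The extraction step mounts the preload tarball read-only into a throwaway kicbase container and untars it into the machine's volume; the ~4.2s duration above is that tar run. A minimal Go sketch of building the same docker run invocation (the /tmp tarball path in main is a hypothetical placeholder):

```go
package main

import "os/exec"

// extractPreload untars an lz4-compressed image preload into a named docker
// volume by running tar inside the kicbase image, matching the log's
// "docker run --rm --entrypoint /usr/bin/tar ..." command.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
		"-v", volume+":/extractDir",        // target volume holding /var
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	_ = extractPreload(
		"/tmp/preload.tar.lz4", // hypothetical local path to the preload tarball
		"newest-cni-987495",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
	)
}
```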
	W1213 09:56:44.612667  271045 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 09:56:44.612789  271045 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:56:44.665912  271045 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-987495 --name newest-cni-987495 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-987495 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-987495 --network newest-cni-987495 --ip 192.168.85.2 --volume newest-cni-987495:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:56:44.956868  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Running}}
	I1213 09:56:44.977125  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:44.997663  271045 cli_runner.go:164] Run: docker exec newest-cni-987495 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:56:45.071335  271045 oci.go:144] the created container "newest-cni-987495" has a running status.
	I1213 09:56:45.071378  271045 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa...
	I1213 09:56:45.174388  271045 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:56:45.225815  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.265949  271045 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:56:45.265980  271045 kic_runner.go:114] Args: [docker exec --privileged newest-cni-987495 chown docker:docker /home/docker/.ssh/authorized_keys]
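	[annotation] Key creation for the kic container amounts to generating an RSA pair, writing the PEM private key as id_rsa, and installing the public half as the authorized_keys line pushed above (the 381-byte file). A self-contained sketch of that step using golang.org/x/crypto/ssh:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the key pair backing id_rsa / id_rsa.pub.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	// MarshalAuthorizedKey yields the "ssh-rsa AAAA..." line appended to
	// /home/docker/.ssh/authorized_keys inside the container.
	authorized := ssh.MarshalAuthorizedKey(pub)

	_ = os.WriteFile("id_rsa", privPEM, 0o600)
	_ = os.WriteFile("id_rsa.pub", authorized, 0o644)
}
```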
	I1213 09:56:45.330610  271045 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 09:56:45.353288  271045 machine.go:94] provisionDockerMachine start ...
	I1213 09:56:45.353380  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:45.380805  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:45.381141  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:45.381150  271045 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:56:45.381824  271045 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 09:56:48.535017  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.535041  271045 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 09:56:48.535116  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.552976  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.553289  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.553308  271045 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 09:56:48.715838  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 09:56:48.716003  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:48.735300  271045 main.go:143] libmachine: Using SSH client type: native
	I1213 09:56:48.735636  271045 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1213 09:56:48.735659  271045 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:56:48.887956  271045 main.go:143] libmachine: SSH cmd err, output: <nil>: 
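	[annotation] Note the first dial at 09:56:45 fails with "ssh: handshake failed: EOF" while sshd inside the freshly started container is still coming up; the client simply retries until the hostname command succeeds three seconds later. A sketch of that dial-with-retry loop (address, user, and key paths are placeholders taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runWithRetry dials until the SSH handshake succeeds (early attempts can
// fail with EOF while sshd is still starting), then runs one command.
func runWithRetry(addr string, cfg *ssh.ClientConfig, cmd string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			if time.Now().After(deadline) {
				return nil, err
			}
			time.Sleep(time.Second)
			continue
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer sess.Close()
		return sess.CombinedOutput(cmd)
	}
}

func main() {
	key, _ := os.ReadFile("id_rsa") // private key written earlier
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	out, err := runWithRetry("127.0.0.1:33093", cfg, "hostname", time.Minute)
	fmt.Println(string(out), err)
}
```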
	I1213 09:56:48.887983  271045 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 09:56:48.888015  271045 ubuntu.go:190] setting up certificates
	I1213 09:56:48.888025  271045 provision.go:84] configureAuth start
	I1213 09:56:48.888083  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:48.904758  271045 provision.go:143] copyHostCerts
	I1213 09:56:48.904824  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 09:56:48.904839  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 09:56:48.904928  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 09:56:48.905026  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 09:56:48.905037  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 09:56:48.905066  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 09:56:48.905132  271045 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 09:56:48.905142  271045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 09:56:48.905168  271045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 09:56:48.905218  271045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
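	[annotation] The server certificate generated here carries both IP and DNS SANs (san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]) and is signed by the local minikube CA. A condensed crypto/x509 sketch of that generation; a throwaway CA is created inline here instead of loading ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-987495"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-987495"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = srvDER // PEM-encode and write as server.pem in a real run
}
```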
	I1213 09:56:49.148109  271045 provision.go:177] copyRemoteCerts
	I1213 09:56:49.148175  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:56:49.148216  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.167297  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.275554  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:56:49.293524  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:56:49.311255  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:56:49.328583  271045 provision.go:87] duration metric: took 440.545309ms to configureAuth
	I1213 09:56:49.328607  271045 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:56:49.328807  271045 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:56:49.328820  271045 machine.go:97] duration metric: took 3.97550235s to provisionDockerMachine
	I1213 09:56:49.328826  271045 client.go:176] duration metric: took 9.628307523s to LocalClient.Create
	I1213 09:56:49.328840  271045 start.go:167] duration metric: took 9.628363097s to libmachine.API.Create "newest-cni-987495"
	I1213 09:56:49.328847  271045 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 09:56:49.328857  271045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:56:49.328908  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:56:49.328944  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.345687  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.452617  271045 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:56:49.456102  271045 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:56:49.456132  271045 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:56:49.456144  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 09:56:49.456197  271045 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 09:56:49.456275  271045 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 09:56:49.456381  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:56:49.464374  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:49.482803  271045 start.go:296] duration metric: took 153.942655ms for postStartSetup
	I1213 09:56:49.483179  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.501288  271045 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 09:56:49.501569  271045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:56:49.501608  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.519643  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.620541  271045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:56:49.625365  271045 start.go:128] duration metric: took 9.928403278s to createHost
	I1213 09:56:49.625389  271045 start.go:83] releasing machines lock for "newest-cni-987495", held for 9.928529598s
	I1213 09:56:49.625471  271045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 09:56:49.645994  271045 ssh_runner.go:195] Run: cat /version.json
	I1213 09:56:49.646048  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.646301  271045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:56:49.646369  271045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 09:56:49.671756  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.687696  271045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 09:56:49.781881  271045 ssh_runner.go:195] Run: systemctl --version
	I1213 09:56:49.881841  271045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:56:49.886330  271045 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:56:49.886436  271045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:56:49.913764  271045 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
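	[annotation] Conflicting bridge/podman CNI configs are disabled by renaming them with a .mk_disabled suffix rather than deleting them, so a later start can restore them. A local sketch of that find-and-rename (the log runs the equivalent find/mv over SSH):

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs in dir to *.mk_disabled,
// matching the "find ... -exec mv {} {}.mk_disabled" command in the log.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	_, _ = disableBridgeCNI("/etc/cni/net.d")
}
```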
	I1213 09:56:49.913793  271045 start.go:496] detecting cgroup driver to use...
	I1213 09:56:49.913826  271045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 09:56:49.913873  271045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 09:56:49.928737  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 09:56:49.941512  271045 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:56:49.941581  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:56:49.958476  271045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:56:49.976657  271045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:56:50.092571  271045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:56:50.215484  271045 docker.go:234] disabling docker service ...
	I1213 09:56:50.215599  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:56:50.236595  271045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:56:50.249894  271045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:56:50.372863  271045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:56:50.492030  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:56:50.505104  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:56:50.520463  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 09:56:50.530400  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 09:56:50.539863  271045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 09:56:50.539979  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 09:56:50.549222  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.558350  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 09:56:50.567652  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 09:56:50.576927  271045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:56:50.585862  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 09:56:50.595196  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 09:56:50.604766  271045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
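	[annotation] The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.10.1, forcing SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalizing the runc runtime type to io.containerd.runc.v2. A sketch of the same rewrites done locally with Go regexes (simplified; the real sed commands run over SSH):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitutions as the sed commands in the log.
	rules := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`}, // cgroupfs driver on host
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
```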
	I1213 09:56:50.613925  271045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:56:50.621385  271045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:56:50.629064  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:50.735877  271045 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 09:56:50.857747  271045 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 09:56:50.857827  271045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 09:56:50.861671  271045 start.go:564] Will wait 60s for crictl version
	I1213 09:56:50.861742  271045 ssh_runner.go:195] Run: which crictl
	I1213 09:56:50.865238  271045 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:56:50.887066  271045 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 09:56:50.887150  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.905856  271045 ssh_runner.go:195] Run: containerd --version
	I1213 09:56:50.933984  271045 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 09:56:50.936956  271045 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:56:50.952566  271045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:56:50.956629  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:50.969533  271045 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:56:50.972467  271045 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:56:50.972618  271045 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 09:56:50.972704  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:50.996202  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:50.996226  271045 containerd.go:534] Images already preloaded, skipping extraction
	I1213 09:56:50.996284  271045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:56:51.022962  271045 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 09:56:51.022986  271045 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:56:51.022994  271045 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 09:56:51.023092  271045 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:56:51.023168  271045 ssh_runner.go:195] Run: sudo crictl info
	I1213 09:56:51.048658  271045 cni.go:84] Creating CNI manager for ""
	I1213 09:56:51.048683  271045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 09:56:51.048705  271045 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:56:51.048728  271045 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:56:51.048850  271045 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:56:51.048925  271045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:56:51.056725  271045 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:56:51.056795  271045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:56:51.064442  271045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 09:56:51.077624  271045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:56:51.090906  271045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
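	[annotation] The 2235-byte kubeadm.yaml pushed above is rendered from the kubeadm options struct shown earlier. A minimal text/template sketch of that rendering for a few of the fields (a toy template for illustration, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

// A toy subset of the kubeadm ClusterConfiguration the log dumps in full.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type kubeadmOpts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	opts := kubeadmOpts{ // values taken from the log above
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.35.0-beta.0",
		PodSubnet:           "10.42.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
```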
	I1213 09:56:51.103635  271045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:56:51.107116  271045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:56:51.116647  271045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:56:51.221976  271045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:56:51.239889  271045 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 09:56:51.239918  271045 certs.go:195] generating shared ca certs ...
	I1213 09:56:51.239935  271045 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.240136  271045 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 09:56:51.240196  271045 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 09:56:51.240208  271045 certs.go:257] generating profile certs ...
	I1213 09:56:51.240266  271045 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 09:56:51.240284  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt with IP's: []
	I1213 09:56:51.511583  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt ...
	I1213 09:56:51.511617  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.crt: {Name:mk5464ab31f64983cb0e8dc71ff54579969d5d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511818  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key ...
	I1213 09:56:51.511831  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key: {Name:mke550d3f89d3ec2570e79fb5b504a6e90138b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.511927  271045 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 09:56:51.511944  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 09:56:51.643285  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e ...
	I1213 09:56:51.643317  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e: {Name:mk6d3f18d3edc92465fdf76beebc6a34d454297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644306  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e ...
	I1213 09:56:51.644326  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e: {Name:mk3fa19df9059a7cd289477f6e36bd1b8a8de61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.644427  271045 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt
	I1213 09:56:51.644510  271045 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key
	I1213 09:56:51.644572  271045 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 09:56:51.644592  271045 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt with IP's: []
	I1213 09:56:51.762782  271045 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt ...
	I1213 09:56:51.762818  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt: {Name:mkc4655600dc8f487ec74e9635d5a6c0aaea04b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.763666  271045 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key ...
	I1213 09:56:51.763686  271045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key: {Name:mkfc1bfb8023d67db678ef417275fa70be4e1a3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:56:51.764520  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 09:56:51.764583  271045 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 09:56:51.764597  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:56:51.764630  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:56:51.764665  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:56:51.764701  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 09:56:51.764754  271045 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 09:56:51.765415  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:56:51.785100  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:56:51.803674  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:56:51.820883  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 09:56:51.838678  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:56:51.855995  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:56:51.873808  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:56:51.891156  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:56:51.908530  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 09:56:51.925774  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:56:51.943306  271045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 09:56:51.959997  271045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:56:51.972782  271045 ssh_runner.go:195] Run: openssl version
	I1213 09:56:51.978921  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.986461  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 09:56:51.993616  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997401  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 09:56:51.997462  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 09:56:52.049678  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:56:52.070937  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 09:56:52.084538  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.094354  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 09:56:52.106583  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110602  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.110668  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 09:56:52.153509  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.160769  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:56:52.168129  271045 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.175195  271045 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:56:52.182476  271045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186073  271045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.186133  271045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:56:52.226828  271045 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:56:52.234290  271045 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
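	[annotation] The openssl x509 -hash calls above compute each certificate's subject-name hash, and the symlinks (51391683.0, 3ec20f2e.0, b5213941.0) give OpenSSL's hashed-directory lookup a handle on the certs in /etc/ssl/certs. A local sketch of the hash-then-link step:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash symlinks certPath into certsDir under "<subject-hash>.0",
// the naming scheme OpenSSL uses to locate CA certs in a hashed directory.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // mirror ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```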
	I1213 09:56:52.241627  271045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:56:52.245109  271045 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:56:52.245166  271045 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:56:52.245251  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 09:56:52.245315  271045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:56:52.270271  271045 cri.go:89] found id: ""
	I1213 09:56:52.270344  271045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:56:52.278009  271045 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:56:52.285767  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:56:52.285833  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:56:52.293380  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:56:52.293416  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 09:56:52.293469  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 09:56:52.301136  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:56:52.301228  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:56:52.308290  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 09:56:52.315693  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:56:52.315758  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:56:52.323086  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.330869  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:56:52.330965  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:56:52.338261  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 09:56:52.345809  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:56:52.345871  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
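
The grep/rm sequence above is minikube's stale-config cleanup: any kubeconfig that does not point at the expected control-plane endpoint is removed so kubeadm can rewrite it. A condensed sketch of the loop (file list and endpoint taken from the log; here every grep exits 2 because the files do not exist yet):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits 2 if the file is missing, 1 if the endpoint is absent;
        # either way the file is removed so kubeadm regenerates it
        sudo grep "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
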
	I1213 09:56:52.353258  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:56:52.470124  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 09:56:52.470684  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 09:56:52.537914  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
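
The cgroups v1 warning repeated throughout this run is the decisive one: this host is still on cgroup v1 (see the kernel section below), and kubelet v1.35 refuses to run there unless explicitly opted in. Per the warning text, the opt-in is the KubeletConfiguration option FailCgroupV1 (camel-cased failCgroupV1 in the YAML); a hedged sketch of appending it to the config file this run writes — whether minikube exposes a cleaner knob for this is not shown in this log:

    # Assumption: appending a top-level key to the KubeletConfiguration YAML that
    # kubeadm wrote (path taken from the [kubelet-start] lines in this run).
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
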
	I1213 10:00:18.646921  254588 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000078394s
	I1213 10:00:18.646949  254588 kubeadm.go:319] 
	I1213 10:00:18.647006  254588 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:18.647040  254588 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:18.647145  254588 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:18.647149  254588 kubeadm.go:319] 
	I1213 10:00:18.647253  254588 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:18.647285  254588 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:18.647316  254588 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:18.647320  254588 kubeadm.go:319] 
	I1213 10:00:18.652540  254588 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:00:18.653297  254588 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:00:18.653496  254588 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:00:18.653975  254588 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:18.653988  254588 kubeadm.go:319] 
	I1213 10:00:18.654109  254588 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
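
kubeadm's own suggestion is the right first move here. The three probes below (the two it names, plus the health endpoint it was polling) distinguish a crash-looping kubelet from one that simply never came up:

    sudo systemctl status kubelet               # active, or crash-looping under systemd?
    sudo journalctl -xeu kubelet | tail         # the actual failure reason (see the kubelet section below)
    curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm polled for 4m0s
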
	I1213 10:00:18.654176  254588 kubeadm.go:403] duration metric: took 8m6.435468168s to StartCluster
	I1213 10:00:18.654233  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:00:18.654307  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:00:18.680404  254588 cri.go:89] found id: ""
	I1213 10:00:18.680438  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.680448  254588 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:00:18.680454  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:00:18.680527  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:00:18.704694  254588 cri.go:89] found id: ""
	I1213 10:00:18.704765  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.704788  254588 logs.go:284] No container was found matching "etcd"
	I1213 10:00:18.704803  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:00:18.704886  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:00:18.732906  254588 cri.go:89] found id: ""
	I1213 10:00:18.732932  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.732942  254588 logs.go:284] No container was found matching "coredns"
	I1213 10:00:18.732949  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:00:18.733006  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:00:18.758531  254588 cri.go:89] found id: ""
	I1213 10:00:18.758558  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.758567  254588 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:00:18.758574  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:00:18.758643  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:00:18.787111  254588 cri.go:89] found id: ""
	I1213 10:00:18.787138  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.787147  254588 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:00:18.787153  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:00:18.787211  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:00:18.814001  254588 cri.go:89] found id: ""
	I1213 10:00:18.814025  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.814034  254588 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:00:18.814041  254588 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:00:18.814115  254588 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:00:18.842019  254588 cri.go:89] found id: ""
	I1213 10:00:18.842046  254588 logs.go:282] 0 containers: []
	W1213 10:00:18.842059  254588 logs.go:284] No container was found matching "kindnet"
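
After the init failure, minikube sweeps for any control-plane container that might still exist, one `crictl ps` per component. Condensed, the sweep above is:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"   # empty output = component never started
    done
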
	I1213 10:00:18.842096  254588 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:00:18.842115  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:00:18.905936  254588 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:00:18.899124    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.899561    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901056    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.901383    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:00:18.902854    5410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[same five "connection refused" memcache errors and the "connection to the server localhost:8443 was refused" message as above]
	
	** /stderr **
	I1213 10:00:18.905963  254588 logs.go:123] Gathering logs for containerd ...
	I1213 10:00:18.905977  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:00:18.949644  254588 logs.go:123] Gathering logs for container status ...
	I1213 10:00:18.949677  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:00:18.977252  254588 logs.go:123] Gathering logs for kubelet ...
	I1213 10:00:18.977281  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:00:19.035838  254588 logs.go:123] Gathering logs for dmesg ...
	I1213 10:00:19.035876  254588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
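
This log-gathering pass mirrors what `minikube logs` collects; the same data can be pulled by hand from inside the node (commands verbatim from the run):

    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
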
	W1213 10:00:19.049572  254588 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000078394s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:00:19.049623  254588 out.go:285] * 
	W1213 10:00:19.049685  254588 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output quoted above]
	
	W1213 10:00:19.049702  254588 out.go:285] * 
	W1213 10:00:19.051871  254588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:00:19.057079  254588 out.go:203] 
	W1213 10:00:19.061004  254588 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output quoted above]
	
	W1213 10:00:19.061054  254588 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:00:19.061074  254588 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
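
minikube's suggestion translates to a retry with an explicit kubelet cgroup driver; a sketch, with the profile name left as a placeholder (the failing profile for this process is not shown in this excerpt):

    # Hypothetical retry following the suggestion above; <profile> is a placeholder.
    minikube start -p <profile> --driver=docker --container-runtime=containerd \
        --extra-config=kubelet.cgroup-driver=systemd
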
	I1213 10:00:19.064330  254588 out.go:203] 
	I1213 10:00:56.509890  271045 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 10:00:56.509925  271045 kubeadm.go:319] 
	I1213 10:00:56.510001  271045 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:00:56.511602  271045 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:00:56.511668  271045 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:00:56.511767  271045 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:00:56.511830  271045 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:00:56.511910  271045 kubeadm.go:319] OS: Linux
	I1213 10:00:56.511982  271045 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:00:56.512040  271045 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:00:56.512094  271045 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:00:56.512149  271045 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:00:56.512201  271045 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:00:56.512255  271045 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:00:56.512304  271045 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:00:56.512355  271045 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:00:56.512404  271045 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:00:56.512480  271045 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:00:56.512579  271045 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:00:56.512672  271045 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:00:56.512738  271045 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:00:56.517802  271045 out.go:252]   - Generating certificates and keys ...
	I1213 10:00:56.517920  271045 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:00:56.518021  271045 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:00:56.518091  271045 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:00:56.518172  271045 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:00:56.518249  271045 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:00:56.518309  271045 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:00:56.518377  271045 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:00:56.518506  271045 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:00:56.518570  271045 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:00:56.518698  271045 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:00:56.518773  271045 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:00:56.518859  271045 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:00:56.518935  271045 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:00:56.519032  271045 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:00:56.519114  271045 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:00:56.519194  271045 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:00:56.519269  271045 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:00:56.519370  271045 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:00:56.519457  271045 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:00:56.519579  271045 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:00:56.519671  271045 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:00:56.522631  271045 out.go:252]   - Booting up control plane ...
	I1213 10:00:56.522739  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:00:56.522840  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:00:56.522914  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:00:56.523056  271045 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:00:56.523185  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:00:56.523309  271045 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:00:56.523400  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:00:56.523445  271045 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:00:56.523640  271045 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:00:56.523773  271045 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:00:56.523846  271045 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516844s
	I1213 10:00:56.523854  271045 kubeadm.go:319] 
	I1213 10:00:56.523912  271045 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:00:56.523948  271045 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:00:56.524055  271045 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:00:56.524063  271045 kubeadm.go:319] 
	I1213 10:00:56.524166  271045 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:00:56.524206  271045 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:00:56.524241  271045 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:00:56.524270  271045 kubeadm.go:319] 
	W1213 10:00:56.524373  271045 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-987495] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516844s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:00:56.524458  271045 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1213 10:00:56.936054  271045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
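
Before the automatic retry, minikube wipes the half-initialized state. The reset sequence above, condensed (both commands verbatim from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo systemctl is-active --quiet service kubelet   # confirm kubelet is down before retrying
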
	I1213 10:00:56.948710  271045 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:00:56.948772  271045 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:00:56.956533  271045 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:00:56.956554  271045 kubeadm.go:158] found existing configuration files:
	
	I1213 10:00:56.956624  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:00:56.964049  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:00:56.964112  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:00:56.971099  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:00:56.978635  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:00:56.978720  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:00:56.986082  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:00:56.993634  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:00:56.993701  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:00:57.003184  271045 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:00:57.013129  271045 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:00:57.013248  271045 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:00:57.021455  271045 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:00:57.062115  271045 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:00:57.062425  271045 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:00:57.136560  271045 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:00:57.136636  271045 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:00:57.136678  271045 kubeadm.go:319] OS: Linux
	I1213 10:00:57.136729  271045 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:00:57.136783  271045 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:00:57.136834  271045 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:00:57.136885  271045 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:00:57.136937  271045 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:00:57.136994  271045 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:00:57.137044  271045 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:00:57.137096  271045 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:00:57.137147  271045 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:00:57.207624  271045 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:00:57.207820  271045 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:00:57.207976  271045 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:00:57.213325  271045 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:00:57.218560  271045 out.go:252]   - Generating certificates and keys ...
	I1213 10:00:57.218672  271045 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:00:57.218785  271045 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:00:57.218899  271045 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:00:57.218984  271045 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:00:57.219077  271045 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:00:57.219151  271045 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:00:57.219232  271045 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:00:57.219337  271045 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:00:57.219441  271045 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:00:57.219573  271045 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:00:57.219786  271045 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:00:57.219856  271045 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:00:57.593590  271045 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:00:58.124861  271045 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:00:58.251326  271045 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:00:58.576584  271045 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:00:58.987419  271045 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:00:58.988170  271045 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:00:58.991572  271045 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:00:58.994699  271045 out.go:252]   - Booting up control plane ...
	I1213 10:00:58.994809  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:00:58.994900  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:00:58.995906  271045 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:00:59.017175  271045 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:00:59.017323  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:00:59.029473  271045 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:00:59.029578  271045 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:00:59.029624  271045 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:00:59.173704  271045 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:00:59.173828  271045 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:52:02 no-preload-328069 containerd[755]: time="2025-12-13T09:52:02.879804179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.990607336Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 13 09:52:03 no-preload-328069 containerd[755]: time="2025-12-13T09:52:03.992952819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.009273066Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:04 no-preload-328069 containerd[755]: time="2025-12-13T09:52:04.010406673Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.074822736Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.077081596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.085416708Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:05 no-preload-328069 containerd[755]: time="2025-12-13T09:52:05.087033692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.147342869Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.149762354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157038592Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:06 no-preload-328069 containerd[755]: time="2025-12-13T09:52:06.157794989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.618593571Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.620865199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.629284660Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:07 no-preload-328069 containerd[755]: time="2025-12-13T09:52:07.630354201Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.744735165Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.746972085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.756996214Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:08 no-preload-328069 containerd[755]: time="2025-12-13T09:52:08.757622616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.140072906Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.142312452Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.150787462Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 13 09:52:09 no-preload-328069 containerd[755]: time="2025-12-13T09:52:09.151785092Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:02:09.972126    6843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:02:09.972820    6843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:02:09.974499    6843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:02:09.975174    6843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:02:09.976749    6843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:02:10 up  1:44,  0 user,  load average: 0.63, 1.06, 1.69
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:02:06 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:02:07 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 464.
	Dec 13 10:02:07 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:07 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:07 no-preload-328069 kubelet[6724]: E1213 10:02:07.320348    6724 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:02:07 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:02:07 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 465.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:08 no-preload-328069 kubelet[6729]: E1213 10:02:08.070430    6729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 466.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:08 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:08 no-preload-328069 kubelet[6740]: E1213 10:02:08.825436    6740 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:02:08 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:02:09 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 467.
	Dec 13 10:02:09 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:09 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:02:09 no-preload-328069 kubelet[6762]: E1213 10:02:09.583928    6762 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:02:09 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:02:09 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 6 (310.733341ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:02:10.413377  279056 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (106.91s)
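Note on the failure mode: every kubelet start in this profile dies in configuration validation with "kubelet is configured to not run on a host using cgroup v1" (restart counters 464-467 in the kubelet log above), so the apiserver never comes up and the dependent checks fail. The host (Ubuntu 20.04, kernel 5.15.0-1084-aws) still boots with cgroup v1, which the v1.35.0-beta.0 kubelet rejects unless explicitly permitted. A minimal sketch of the opt-out, assuming the failCgroupV1 KubeletConfiguration field introduced in Kubernetes 1.31 is still honored by v1.35.0-beta.0; this is an illustration, not a fix applied by the harness:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Hypothetical override: explicitly allow running on a cgroup v1 host.
	# Recent releases treat cgroup v1 as unsupported and refuse to start by default.
	failCgroupV1: false

The durable fix is a cgroup v2 host, e.g. booting the runner with systemd.unified_cgroup_hierarchy=1 on the kernel command line, rather than overriding the validation.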

TestStartStop/group/no-preload/serial/SecondStart (370s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 10:02:14.443106    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:02:55.061842    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:03:51.888698    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:04:35.553113    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.146924044s)
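For local triage the failing invocation can be rerun as below; the start command is copied verbatim from the log, while the ssh/journalctl step is a suggested way to read the kubelet validation error directly on the node (out/minikube-linux-arm64 is the tree-local build this job uses). The full captured stdout/stderr of the run follows.

	out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 \
	  --alsologtostderr --wait=true --preload=false --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
	# Then inspect why the kubelet keeps restarting:
	out/minikube-linux-arm64 ssh -p no-preload-328069 -- \
	  sudo journalctl -u kubelet --no-pager -n 20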

-- stdout --
	* [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 10:02:11.945228  279351 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:02:11.945357  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945368  279351 out.go:374] Setting ErrFile to fd 2...
	I1213 10:02:11.945373  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945614  279351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:02:11.945995  279351 out.go:368] Setting JSON to false
	I1213 10:02:11.946845  279351 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6284,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:02:11.946916  279351 start.go:143] virtualization:  
	I1213 10:02:11.952053  279351 out.go:179] * [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:02:11.955099  279351 notify.go:221] Checking for updates...
	I1213 10:02:11.955646  279351 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:02:11.958871  279351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:02:11.961865  279351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:11.964714  279351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:02:11.967733  279351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:02:11.970563  279351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:02:11.973905  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:11.974462  279351 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:02:11.997403  279351 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:02:11.997517  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.056888  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.046991024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.057004  279351 docker.go:319] overlay module found
	I1213 10:02:12.060124  279351 out.go:179] * Using the docker driver based on existing profile
	I1213 10:02:12.062920  279351 start.go:309] selected driver: docker
	I1213 10:02:12.062939  279351 start.go:927] validating driver "docker" against &{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.063028  279351 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:02:12.063866  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.125598  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.116735082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.125931  279351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:02:12.125965  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:12.126013  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:12.126061  279351 start.go:353] cluster config:
	{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.130988  279351 out.go:179] * Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	I1213 10:02:12.133837  279351 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:02:12.136720  279351 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:02:12.139557  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:12.139700  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.140016  279351 cache.go:107] acquiring lock: {Name:mk1139c6b82931eb02e4fc01be1646c4b5fb6137 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140101  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 10:02:12.140115  279351 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.272µs
	I1213 10:02:12.140129  279351 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 10:02:12.140147  279351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:02:12.140331  279351 cache.go:107] acquiring lock: {Name:mkdbfdeb98feed2961bb0c3f8a6d24ab310632c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140399  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 10:02:12.140411  279351 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 85.319µs
	I1213 10:02:12.140418  279351 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140432  279351 cache.go:107] acquiring lock: {Name:mke9e3c7a7c5dbec5022163863159aa6109df603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140467  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:02:12.140476  279351 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.475µs
	I1213 10:02:12.140483  279351 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140493  279351 cache.go:107] acquiring lock: {Name:mkc53cc9694a66de0b7b66cb687f9b4074b3c86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140525  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:02:12.140535  279351 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.659µs
	I1213 10:02:12.140542  279351 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140552  279351 cache.go:107] acquiring lock: {Name:mk349a8caa03fed06b3fb3e0b39b00347dcb9b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140580  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:02:12.140590  279351 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 38.45µs
	I1213 10:02:12.140596  279351 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140607  279351 cache.go:107] acquiring lock: {Name:mk3eb587f4f7424524980a5884c47c318ddc6f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140639  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 10:02:12.140648  279351 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.723µs
	I1213 10:02:12.140653  279351 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 10:02:12.140663  279351 cache.go:107] acquiring lock: {Name:mk0e27a2c36e6dbaae7432bc4e472a6212c75814 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140693  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 10:02:12.140711  279351 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.993µs
	I1213 10:02:12.140720  279351 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 10:02:12.140730  279351 cache.go:107] acquiring lock: {Name:mk07cf085b7776efa96cbbe85a2f7495a2806d09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140801  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 10:02:12.140813  279351 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 83.981µs
	I1213 10:02:12.140820  279351 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 10:02:12.140827  279351 cache.go:87] Successfully saved all images to host disk.
	I1213 10:02:12.158842  279351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:02:12.158865  279351 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:02:12.158888  279351 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:02:12.158915  279351 start.go:360] acquireMachinesLock for no-preload-328069: {Name:mkb27df066f9039321ce696d5a7013e52143011a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.158977  279351 start.go:364] duration metric: took 42.741µs to acquireMachinesLock for "no-preload-328069"
	I1213 10:02:12.158998  279351 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:02:12.159006  279351 fix.go:54] fixHost starting: 
	I1213 10:02:12.159253  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.176273  279351 fix.go:112] recreateIfNeeded on no-preload-328069: state=Stopped err=<nil>
	W1213 10:02:12.176305  279351 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:02:12.181446  279351 out.go:252] * Restarting existing docker container for "no-preload-328069" ...
	I1213 10:02:12.181532  279351 cli_runner.go:164] Run: docker start no-preload-328069
	I1213 10:02:12.462743  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.496878  279351 kic.go:430] container "no-preload-328069" state is running.
	I1213 10:02:12.497965  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:12.519887  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.520284  279351 machine.go:94] provisionDockerMachine start ...
	I1213 10:02:12.520377  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:12.540812  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:12.541137  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:12.541152  279351 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:02:12.541877  279351 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:02:15.695176  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 10:02:15.695202  279351 ubuntu.go:182] provisioning hostname "no-preload-328069"
	I1213 10:02:15.695302  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.713225  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.713580  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.713597  279351 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-328069 && echo "no-preload-328069" | sudo tee /etc/hostname
	I1213 10:02:15.876751  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 10:02:15.876830  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.894850  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.895176  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.895200  279351 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328069/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:02:16.048412  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:02:16.048436  279351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:02:16.048458  279351 ubuntu.go:190] setting up certificates
	I1213 10:02:16.048468  279351 provision.go:84] configureAuth start
	I1213 10:02:16.048553  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.075718  279351 provision.go:143] copyHostCerts
	I1213 10:02:16.075798  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:02:16.075813  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:02:16.075907  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:02:16.076022  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:02:16.076028  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:02:16.076054  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:02:16.076133  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:02:16.076138  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:02:16.076163  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:02:16.076218  279351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.no-preload-328069 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-328069]
	I1213 10:02:16.381103  279351 provision.go:177] copyRemoteCerts
	I1213 10:02:16.381179  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:02:16.381229  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.401342  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.507428  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:02:16.525230  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:02:16.542799  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:02:16.561062  279351 provision.go:87] duration metric: took 512.572112ms to configureAuth
	I1213 10:02:16.561095  279351 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:02:16.561318  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:16.561332  279351 machine.go:97] duration metric: took 4.041034442s to provisionDockerMachine
	I1213 10:02:16.561341  279351 start.go:293] postStartSetup for "no-preload-328069" (driver="docker")
	I1213 10:02:16.561352  279351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:02:16.561415  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:02:16.561466  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.581239  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.687645  279351 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:02:16.691142  279351 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:02:16.691212  279351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:02:16.691231  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:02:16.691302  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:02:16.691382  279351 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:02:16.691493  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:02:16.698909  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:16.716254  279351 start.go:296] duration metric: took 154.898803ms for postStartSetup
	I1213 10:02:16.716393  279351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:02:16.716444  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.733818  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.836603  279351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:02:16.841822  279351 fix.go:56] duration metric: took 4.68280802s for fixHost
	I1213 10:02:16.841848  279351 start.go:83] releasing machines lock for "no-preload-328069", held for 4.682859762s
	I1213 10:02:16.841920  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.859796  279351 ssh_runner.go:195] Run: cat /version.json
	I1213 10:02:16.859857  279351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:02:16.859863  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.859911  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.883792  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.886103  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:17.082036  279351 ssh_runner.go:195] Run: systemctl --version
	I1213 10:02:17.088528  279351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:02:17.092773  279351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:02:17.092838  279351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:02:17.100613  279351 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:02:17.100639  279351 start.go:496] detecting cgroup driver to use...
	I1213 10:02:17.100671  279351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:02:17.100716  279351 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:02:17.117849  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:02:17.130707  279351 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:02:17.130820  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:02:17.146153  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:02:17.159452  279351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:02:17.271735  279351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:02:17.386128  279351 docker.go:234] disabling docker service ...
	I1213 10:02:17.386205  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:02:17.401329  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:02:17.414137  279351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:02:17.532620  279351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:02:17.660743  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:02:17.673611  279351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:02:17.687734  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:02:17.696861  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:02:17.705596  279351 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:02:17.705702  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:02:17.714350  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.723153  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:02:17.732016  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.740626  279351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:02:17.748540  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:02:17.757314  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:02:17.766110  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:02:17.774949  279351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:02:17.782195  279351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:02:17.789627  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:17.894369  279351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:02:17.987177  279351 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:02:17.987297  279351 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:02:17.991600  279351 start.go:564] Will wait 60s for crictl version
	I1213 10:02:17.991728  279351 ssh_runner.go:195] Run: which crictl
	I1213 10:02:17.995375  279351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:02:18.022384  279351 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:02:18.022552  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.048621  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.076009  279351 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:02:18.078918  279351 cli_runner.go:164] Run: docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:02:18.096351  279351 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 10:02:18.100312  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:02:18.110269  279351 kubeadm.go:884] updating cluster {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:02:18.110401  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:18.110451  279351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:02:18.137499  279351 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:02:18.137523  279351 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:02:18.137531  279351 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:02:18.137633  279351 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:02:18.137698  279351 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:02:18.163191  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:18.163216  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:18.163234  279351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:02:18.163255  279351 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328069 NodeName:no-preload-328069 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:02:18.163402  279351 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-328069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
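The YAML block above is what minikube renders from the kubeadm options struct logged at kubeadm.go:190; the rendered bytes are then shipped to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that render step in Go, assuming a hypothetical template and parameter struct rather than minikube's actual ones:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams mirrors a small subset of the options minikube feeds
    // into its config template; struct and template are illustrative only.
    type kubeadmParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.76.2",
            BindPort:         8443,
            NodeName:         "no-preload-328069",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.35.0-beta.0",
        }
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Render to stdout; minikube instead ships the bytes over SSH.
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }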
	I1213 10:02:18.163480  279351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:02:18.171245  279351 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:02:18.171338  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:02:18.178895  279351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:02:18.191581  279351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:02:18.209596  279351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
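The "scp memory -->" entries denote writing an in-memory buffer directly to a remote path over the already-open SSH connection, not copying a local file. A sketch of that operation with golang.org/x/crypto/ssh, assuming an already-connected *ssh.Client; this is not minikube's sshutil code:

    package remotefile

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams data to path on the remote host via a single
    // SSH session, roughly what the scp-memory log lines describe.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // cat copies stdin into the file; root-owned targets would
        // need sudo tee instead.
        return sess.Run(fmt.Sprintf("cat > %q", path))
    }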
	I1213 10:02:18.222717  279351 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:02:18.227371  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
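The bash one-liner above makes the hosts entry idempotent: filter out any stale line ending in the hostname, append a fresh "ip<TAB>host" mapping, and copy the temp file back over /etc/hosts. The same logic as a local Go sketch (minikube runs it remotely through the shell command shown):

    package hosts

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites path so exactly one line maps host to ip.
    // Illustrative only; names and error handling are simplified.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing mapping for this hostname (the grep -v step).
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host) // the echo step
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // replaces the sudo cp step
    }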
	I1213 10:02:18.237443  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:18.378945  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:18.395659  279351 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069 for IP: 192.168.76.2
	I1213 10:02:18.395721  279351 certs.go:195] generating shared ca certs ...
	I1213 10:02:18.395754  279351 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:18.395941  279351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:02:18.396012  279351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:02:18.396046  279351 certs.go:257] generating profile certs ...
	I1213 10:02:18.396189  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key
	I1213 10:02:18.396294  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a
	I1213 10:02:18.396360  279351 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key
	I1213 10:02:18.396502  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:02:18.396559  279351 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:02:18.396589  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:02:18.396649  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:02:18.396703  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:02:18.396763  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:02:18.396836  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:18.397509  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:02:18.418112  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:02:18.438679  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:02:18.457466  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:02:18.475034  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:02:18.492480  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:02:18.509931  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:02:18.526519  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:02:18.543688  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:02:18.560978  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:02:18.577824  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:02:18.595597  279351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:02:18.608319  279351 ssh_runner.go:195] Run: openssl version
	I1213 10:02:18.614518  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.622207  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:02:18.629586  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633292  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633355  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.674403  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:02:18.682293  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.689424  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:02:18.697040  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700632  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700740  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.741591  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:02:18.749136  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.756646  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:02:18.764252  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768073  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768140  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.809211  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
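Each test -s / ln -fs / openssl x509 -hash / test -L sequence above installs one CA into the system trust store: OpenSSL looks certificates up through a <subject-hash>.0 symlink under /etc/ssl/certs, which is where b5213941.0, 51391683.0, and 3ec20f2e.0 come from. A sketch of that step, shelling out to the same openssl invocation; package and function names are illustrative:

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash creates the /etc/ssl/certs/<subject-hash>.0 symlink
    // that OpenSSL's c_rehash-style certificate lookup expects.
    func linkCertByHash(certPath, sslCertsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(sslCertsDir, hash+".0")
        os.Remove(link) // mirror ln -fs: replace a stale link if present
        return os.Symlink(certPath, link)
    }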
	I1213 10:02:18.816468  279351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:02:18.820048  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:02:18.860814  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:02:18.901547  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:02:18.942314  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:02:18.983558  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:02:19.024500  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
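The -checkend 86400 runs above ask whether each control-plane certificate expires within the next 24 hours (86,400 seconds); a nonzero exit would force regeneration. The equivalent check in Go with crypto/x509, as a sketch:

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }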
	I1213 10:02:19.067253  279351 kubeadm.go:401] StartCluster: {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:19.067362  279351 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:02:19.067437  279351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:02:19.094782  279351 cri.go:89] found id: ""
	I1213 10:02:19.094872  279351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:02:19.102658  279351 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:02:19.102679  279351 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:02:19.102731  279351 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:02:19.110008  279351 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:02:19.110442  279351 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.110549  279351 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-328069" cluster setting kubeconfig missing "no-preload-328069" context setting]
	I1213 10:02:19.110833  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
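kubeconfig.go:62 reports that both the cluster and the context entry for no-preload-328069 are missing and repairs the kubeconfig under a write lock. A sketch of such a repair with client-go's clientcmd package; minikube also wires in the user entry and certificate paths, which this omits:

    package kubeconfig

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // addClusterContext inserts cluster and context entries for name into
    // the kubeconfig at path. Simplified sketch, not minikube's code.
    func addClusterContext(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cluster := api.NewCluster()
        cluster.Server = server // e.g. https://192.168.76.2:8443
        cfg.Clusters[name] = cluster
        ctx := api.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name // assumes a matching user entry exists
        cfg.Contexts[name] = ctx
        return clientcmd.WriteToFile(*cfg, path)
    }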
	I1213 10:02:19.112165  279351 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:02:19.119655  279351 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 10:02:19.119686  279351 kubeadm.go:602] duration metric: took 17.001518ms to restartPrimaryControlPlane
	I1213 10:02:19.119696  279351 kubeadm.go:403] duration metric: took 52.455088ms to StartCluster
	I1213 10:02:19.119710  279351 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.119764  279351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.120342  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.120541  279351 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:02:19.120828  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:19.120875  279351 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:02:19.120946  279351 addons.go:70] Setting storage-provisioner=true in profile "no-preload-328069"
	I1213 10:02:19.120959  279351 addons.go:239] Setting addon storage-provisioner=true in "no-preload-328069"
	I1213 10:02:19.120992  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121000  279351 addons.go:70] Setting dashboard=true in profile "no-preload-328069"
	I1213 10:02:19.121019  279351 addons.go:239] Setting addon dashboard=true in "no-preload-328069"
	W1213 10:02:19.121026  279351 addons.go:248] addon dashboard should already be in state true
	I1213 10:02:19.121047  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121443  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.121464  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.123823  279351 addons.go:70] Setting default-storageclass=true in profile "no-preload-328069"
	I1213 10:02:19.124331  279351 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328069"
	I1213 10:02:19.125424  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.125429  279351 out.go:179] * Verifying Kubernetes components...
	I1213 10:02:19.128526  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:19.159919  279351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:02:19.162662  279351 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:02:19.165476  279351 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:02:19.165500  279351 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.165540  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:02:19.165616  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.168247  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:02:19.168273  279351 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:02:19.168347  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.174889  279351 addons.go:239] Setting addon default-storageclass=true in "no-preload-328069"
	I1213 10:02:19.174936  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.175371  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.207894  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.232585  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.238233  279351 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.238255  279351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:02:19.238316  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.263752  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.335605  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:19.413293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.437951  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:02:19.437973  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:02:19.451798  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.498903  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:02:19.498969  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:02:19.535605  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:02:19.535632  279351 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:02:19.549971  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:02:19.549998  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:02:19.563358  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:02:19.563384  279351 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:02:19.576961  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:02:19.576985  279351 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:02:19.590019  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:02:19.590047  279351 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:02:19.603026  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:02:19.603101  279351 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:02:19.616283  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:19.616306  279351 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:02:19.629758  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.022144  279351 node_ready.go:35] waiting up to 6m0s for node "no-preload-328069" to be "Ready" ...
	W1213 10:02:20.022218  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022247  279351 retry.go:31] will retry after 222.509243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.022338  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022352  279351 retry.go:31] will retry after 268.916005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.022845  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.023027  279351 retry.go:31] will retry after 142.748547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
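Every apply above fails with `connect: connection refused` on [::1]:8443 because kubectl's client-side validation fetches the apiserver's OpenAPI document and the restarted control plane is not serving yet; retry.go therefore re-runs each apply after a growing, jittered delay (222ms, 268ms, 142ms, 425ms, ...). A generic sketch of that retry-with-backoff pattern; the policy shown is illustrative, not minikube's exact one:

    package retry

    import (
        "math/rand"
        "time"
    )

    // withBackoff retries fn until it succeeds or attempts run out,
    // doubling the delay each round and adding up to 50% jitter so
    // parallel appliers do not retry in lockstep.
    func withBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << uint(i)
            d += time.Duration(rand.Int63n(int64(d)/2 + 1))
            time.Sleep(d)
        }
        return err
    }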
	I1213 10:02:20.166410  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:20.226014  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.226097  279351 retry.go:31] will retry after 425.843394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.244927  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:20.292349  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:20.310341  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.310377  279351 retry.go:31] will retry after 355.473376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.349816  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.349858  279351 retry.go:31] will retry after 264.866281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.615981  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:20.652460  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.666962  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:20.692927  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.693006  279351 retry.go:31] will retry after 664.622012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.735811  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.735905  279351 retry.go:31] will retry after 823.814702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.764147  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.764185  279351 retry.go:31] will retry after 778.225677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.358304  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.419247  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.419281  279351 retry.go:31] will retry after 462.360443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.543454  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:21.560472  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:21.637848  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.637931  279351 retry.go:31] will retry after 761.466559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:21.651294  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.651336  279351 retry.go:31] will retry after 529.51866ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.882480  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.939004  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.939036  279351 retry.go:31] will retry after 1.587615767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:22.022643  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:22.181172  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:22.245389  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.245423  279351 retry.go:31] will retry after 1.713713268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.399656  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:22.456680  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.456710  279351 retry.go:31] will retry after 1.136977531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.527628  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:23.594019  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:23.601576  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.601611  279351 retry.go:31] will retry after 1.62095546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:23.655668  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.655711  279351 retry.go:31] will retry after 2.767396253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.960301  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:24.023123  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:24.027493  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:24.027609  279351 retry.go:31] will retry after 2.083793774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.223152  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:25.294507  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.294547  279351 retry.go:31] will retry after 3.357306592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:26.023508  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
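	[editor's note] The node_ready.go:55 warnings interleaved here show a second, independent poll loop hitting the same outage: the node "Ready" check against 192.168.76.2:8443 fires roughly every two seconds and logs-and-retries on connection refused. A minimal dependency-free sketch of that poll shape follows; the function name and cadence are assumptions for illustration, not the actual node_ready.go implementation.

	// Assumed sketch of a fixed-interval readiness poll that tolerates
	// transient apiserver errors, matching the ~2s warning cadence above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitNodeReady(check func() (bool, error), interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ready, err := check()
			if err != nil {
				// Transient outages (connection refused) are logged
				// and retried rather than treated as fatal.
				fmt.Println("error getting node condition (will retry):", err)
			} else if ready {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for node to be Ready")
	}

	func main() {
		_ = waitNodeReady(func() (bool, error) {
			return false, errors.New("dial tcp 192.168.76.2:8443: connect: connection refused")
		}, 2*time.Second, 6*time.Second)
	}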
	I1213 10:02:26.111910  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:26.170217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.170253  279351 retry.go:31] will retry after 1.692121147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.423771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:26.478390  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.478420  279351 retry.go:31] will retry after 3.848755301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.863247  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:27.922311  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.922347  279351 retry.go:31] will retry after 3.151041885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:28.522771  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:28.651995  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:28.709111  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:28.709149  279351 retry.go:31] will retry after 6.321683751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.328257  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:30.391917  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.391949  279351 retry.go:31] will retry after 2.426020497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:30.523587  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:31.074075  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:31.135665  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:31.135702  279351 retry.go:31] will retry after 5.370688496s
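Every failure above is client-side: kubectl validates each manifest against the cluster's published OpenAPI schema before sending anything, so while the apiserver on port 8443 is not listening, the apply dies on the schema download itself (the --validate=false hint in the stderr would only skip validation, not cure the refused connection). A minimal pre-flight sketch that distinguishes the two situations, assuming the apiserver's standard /readyz endpoint; the URL and the skipped TLS verification are illustrative, not minikube's actual probe:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady probes the kube-apiserver's /readyz endpoint.
// A refused TCP connection (as in the log above) means nothing is
// listening yet; an HTTP error means the server is up but not ready.
// The distinction matters: --validate=false only helps with a failed
// openapi fetch on a live server, never with a refused dial.
func apiserverReady(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: minikube's self-signed serving cert
			// would otherwise fail verification in a bare probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return fmt.Errorf("apiserver unreachable: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver not ready: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := apiserverReady("https://192.168.76.2:8443"); err != nil {
		fmt.Println(err)
	}
}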
	I1213 10:02:32.818771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:32.881303  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:32.881336  279351 retry.go:31] will retry after 6.291168603s
	W1213 10:02:33.022961  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:35.031970  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:35.105661  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:35.105695  279351 retry.go:31] will retry after 7.37782956s
	W1213 10:02:35.523543  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:36.507591  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:36.594781  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:36.594821  279351 retry.go:31] will retry after 11.051382377s
	W1213 10:02:37.523602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:39.173293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:39.235217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:39.235250  279351 retry.go:31] will retry after 10.724210844s
	W1213 10:02:40.022845  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:42.022965  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:42.483792  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:42.553607  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:42.553640  279351 retry.go:31] will retry after 7.978735352s
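The retry intervals logged so far (5.37s, 6.29s, 7.38s, 11.05s, 10.72s, 7.98s) grow roughly geometrically with jitter, so the three concurrent addon appliers back off rather than hammer the dead endpoint in lockstep. A sketch of that pattern under assumed parameters; the base delay, growth factor, and jitter here are illustrative, not minikube's retry.go tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff mirrors the shape of the retry.go lines above:
// every failed attempt schedules the next one after a longer,
// jittered delay.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		err := op()
		if err == nil {
			return nil
		}
		// Up to 50% jitter keeps the concurrent appliers (dashboard,
		// storage-provisioner, storageclass) from retrying in lockstep.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = time.Duration(float64(delay) * 1.5)
	}
	return errors.New("retry budget exhausted")
}

func main() {
	_ = retryWithBackoff(3, 5*time.Second, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
}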
	W1213 10:02:44.522618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:46.522815  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:47.647156  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:47.708591  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:47.708634  279351 retry.go:31] will retry after 13.118586966s
	W1213 10:02:48.523193  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:49.959743  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:50.025078  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.025108  279351 retry.go:31] will retry after 20.588870551s
	I1213 10:02:50.533198  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:50.605977  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.606015  279351 retry.go:31] will retry after 10.142953159s
	W1213 10:02:51.022904  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:53.522602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:55.522760  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:58.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:00.022755  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
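Interleaved with the addon retries, node_ready.go polls the node object every couple of seconds for its Ready condition, and each poll fails on the same refused dial. A sketch of that check using client-go (an external k8s.io/client-go dependency is assumed; the kubeconfig path and node name are taken from the log, and this is not minikube's node_ready.go itself):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
// While the apiserver is down, the Get itself fails with the same
// "connection refused" seen in the node_ready.go lines above.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "no-preload-328069")
	fmt.Println(ready, err)
}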
	I1213 10:03:00.749166  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:00.808153  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.808187  279351 retry.go:31] will retry after 20.994258363s
	I1213 10:03:00.827383  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:00.892573  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.892614  279351 retry.go:31] will retry after 23.506083404s
	W1213 10:03:02.022886  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:04.522818  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:07.022905  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:09.522689  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:10.615035  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:10.674075  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:10.674105  279351 retry.go:31] will retry after 31.171515996s
	W1213 10:03:12.023028  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:14.523566  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:17.022946  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:19.522805  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:21.803099  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:21.862689  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:21.862723  279351 retry.go:31] will retry after 32.702784158s
	W1213 10:03:22.023647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:24.399112  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:24.467406  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:24.467440  279351 retry.go:31] will retry after 48.135808011s
	W1213 10:03:24.523014  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:27.022918  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:29.522877  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:32.022758  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:34.023751  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:36.522647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:38.522730  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:41.022772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:41.846416  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:41.903373  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:41.903405  279351 retry.go:31] will retry after 36.157114494s
	W1213 10:03:43.023322  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:45.023831  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:47.522729  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:50.022691  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:52.022951  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:54.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:54.566096  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:54.623468  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:54.623599  279351 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:03:56.523499  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:59.022636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:01.022702  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:03.022778  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:05.523648  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:08.022740  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:10.522719  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:12.522937  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:12.604177  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:04:12.663716  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:12.663824  279351 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:04:14.523618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:17.022816  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:18.061133  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:04:18.126667  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:18.126767  279351 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:04:18.129674  279351 out.go:179] * Enabled addons: 
	I1213 10:04:18.132484  279351 addons.go:530] duration metric: took 1m59.011607468s for enable addons: enabled=[]
	W1213 10:04:19.522762  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	[... 104 near-identical node_ready.go:55 connection-refused retries elided, repeating every ~2.5s from 10:04:22 through 10:08:17 ...]
	W1213 10:08:19.522795  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:20.031934  279351 node_ready.go:38] duration metric: took 6m0.009733727s for node "no-preload-328069" to be "Ready" ...
	I1213 10:08:20.035146  279351 out.go:203] 
	W1213 10:08:20.038039  279351 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:08:20.038064  279351 out.go:285] * 
	W1213 10:08:20.040199  279351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:08:20.043110  279351 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
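The whole failure above reduces to one symptom: nothing answers on the apiserver port, so every kubectl validation call, every addon apply, and the node "Ready" poll fail with "connection refused" until the 6m0s wait expires. A minimal host-side triage sketch (illustrative only, not part of the harness; it assumes the profile name from this test, the 33101 host-port mapping shown in the docker inspect output below, and that crictl is present in the kicbase image, as it normally is):

    # Grab the log bundle the advisory box asks for.
    out/minikube-linux-arm64 -p no-preload-328069 logs --file=logs.txt

    # Resolve the dynamically assigned host port for 8443/tcp (note the empty
    # HostPort bindings vs. the assigned NetworkSettings.Ports in the inspect below).
    docker port no-preload-328069 8443/tcp

    # Probe the apiserver's unauthenticated /version endpoint through that
    # mapping; "connection refused" here confirms nothing is listening.
    curl -sk https://127.0.0.1:33101/version

    # Check whether kube-apiserver is running or crash-looping inside the node.
    docker exec no-preload-328069 crictl ps -a | grep kube-apiserver

If crictl shows the apiserver container exiting, `crictl logs <container-id>` inside the same node is the natural next step.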
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
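The empty proxy snapshot matters here: it rules out host proxy settings as a cause of the refused connections to localhost:8443 and 192.168.76.2:8443 seen above. The equivalent manual check is a one-liner (a sketch, not the harness's own code):

    # Print any proxy-related variables, covering upper- and lower-case forms.
    env | grep -iE '^(https?_proxy|no_proxy)=' || echo "PROXY env: none set"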
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:02:12.212548985Z",
	            "FinishedAt": "2025-12-13T10:02:10.889738311Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e549dceaa2628f46a792f0513237bae1c9187e2280b148782465d5223dc837ce",
	            "SandboxKey": "/var/run/docker/netns/e549dceaa262",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:94:67:0e:78:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "1f33b140f1554f462bc470ee8cae381e2b3ff6375e4e1f2dfdc3776ccc0d5791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
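As a triage shortcut, the fields that matter in the inspect dump above can also be pulled out individually with docker's Go-template support instead of scanning the full JSON; a minimal sketch against this run's container (standard docker CLI, no minikube involvement):

    # memory limit in bytes (3221225472 = 3 GiB, i.e. --memory=3072)
    docker inspect -f '{{.HostConfig.Memory}}' no-preload-328069
    # host port published for the apiserver (8443/tcp -> 33101 in the dump)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-328069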
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 2 (305.537369ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
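When scripting around a partial failure like this, the per-component breakdown survives better as JSON than through the {{.Host}} template used above; a sketch using minikube's standard --output flag (the non-zero exit code is preserved either way):

    out/minikube-linux-arm64 status -p no-preload-328069 --output=json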
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:05 UTC │                     │
	│ stop    │ -p newest-cni-987495 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-987495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
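The most recent start in the audit trail above (the newest-cni profile, not the no-preload profile under test here) is the run whose log follows; it can be replayed verbatim outside the harness with:

    out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0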
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:06:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:06:44.358606  285837 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:06:44.358774  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.358804  285837 out.go:374] Setting ErrFile to fd 2...
	I1213 10:06:44.358810  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.359110  285837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:06:44.359584  285837 out.go:368] Setting JSON to false
	I1213 10:06:44.360505  285837 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6557,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:06:44.360574  285837 start.go:143] virtualization:  
	I1213 10:06:44.365480  285837 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:06:44.368718  285837 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:06:44.368777  285837 notify.go:221] Checking for updates...
	I1213 10:06:44.374649  285837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:06:44.377632  285837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:44.380625  285837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:06:44.383607  285837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:06:44.386498  285837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:06:44.389949  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:44.390563  285837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:06:44.426169  285837 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:06:44.426412  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.479541  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.469338758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.479654  285837 docker.go:319] overlay module found
	I1213 10:06:44.482815  285837 out.go:179] * Using the docker driver based on existing profile
	I1213 10:06:44.485692  285837 start.go:309] selected driver: docker
	I1213 10:06:44.485711  285837 start.go:927] validating driver "docker" against &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.485823  285837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:06:44.486552  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.545256  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.535101087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.545615  285837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:06:44.545650  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:44.545706  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:44.545747  285837 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.548958  285837 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 10:06:44.551733  285837 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:06:44.554789  285837 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:06:44.557547  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:44.557592  285837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:06:44.557602  285837 cache.go:65] Caching tarball of preloaded images
	I1213 10:06:44.557636  285837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:06:44.557693  285837 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:06:44.557703  285837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:06:44.557824  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.577619  285837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:06:44.577644  285837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:06:44.577660  285837 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:06:44.577696  285837 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:06:44.577756  285837 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "newest-cni-987495"
	I1213 10:06:44.577778  285837 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:06:44.577787  285837 fix.go:54] fixHost starting: 
	I1213 10:06:44.578057  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.595484  285837 fix.go:112] recreateIfNeeded on newest-cni-987495: state=Stopped err=<nil>
	W1213 10:06:44.595545  285837 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 10:06:43.023116  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:45.025351  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:44.598729  285837 out.go:252] * Restarting existing docker container for "newest-cni-987495" ...
	I1213 10:06:44.598811  285837 cli_runner.go:164] Run: docker start newest-cni-987495
	I1213 10:06:44.855461  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.880412  285837 kic.go:430] container "newest-cni-987495" state is running.
	I1213 10:06:44.880797  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:44.909497  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.909726  285837 machine.go:94] provisionDockerMachine start ...
	I1213 10:06:44.909783  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:44.930622  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:44.931232  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:44.931291  285837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:06:44.932041  285837 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:06:48.091507  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.091560  285837 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 10:06:48.091625  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.110757  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.111074  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.111090  285837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 10:06:48.273955  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.274083  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.291615  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.291933  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.291961  285837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:06:48.443806  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:06:48.443836  285837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:06:48.443909  285837 ubuntu.go:190] setting up certificates
	I1213 10:06:48.443925  285837 provision.go:84] configureAuth start
	I1213 10:06:48.444014  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:48.461447  285837 provision.go:143] copyHostCerts
	I1213 10:06:48.461529  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:06:48.461544  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:06:48.461626  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:06:48.461731  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:06:48.461744  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:06:48.461773  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:06:48.461831  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:06:48.461840  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:06:48.461873  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:06:48.461929  285837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 10:06:48.588588  285837 provision.go:177] copyRemoteCerts
	I1213 10:06:48.588677  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:06:48.588742  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.606370  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.711093  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:06:48.728291  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:06:48.746238  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:06:48.763841  285837 provision.go:87] duration metric: took 319.890818ms to configureAuth
	I1213 10:06:48.763919  285837 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:06:48.764158  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:48.764172  285837 machine.go:97] duration metric: took 3.854438499s to provisionDockerMachine
	I1213 10:06:48.764181  285837 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 10:06:48.764199  285837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:06:48.764250  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:06:48.764297  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.781656  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.887571  285837 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:06:48.891032  285837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:06:48.891062  285837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:06:48.891074  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:06:48.891128  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:06:48.891231  285837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:06:48.891336  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:06:48.898692  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:48.916401  285837 start.go:296] duration metric: took 152.205033ms for postStartSetup
	I1213 10:06:48.916505  285837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:06:48.916556  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.933960  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.036570  285837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:06:49.041484  285837 fix.go:56] duration metric: took 4.463690867s for fixHost
	I1213 10:06:49.041511  285837 start.go:83] releasing machines lock for "newest-cni-987495", held for 4.463742733s
	I1213 10:06:49.041581  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:49.058404  285837 ssh_runner.go:195] Run: cat /version.json
	I1213 10:06:49.058462  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.058542  285837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:06:49.058607  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.080342  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.081196  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.272327  285837 ssh_runner.go:195] Run: systemctl --version
	I1213 10:06:49.280206  285837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:06:49.285584  285837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:06:49.285649  285837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:06:49.294944  285837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:06:49.295018  285837 start.go:496] detecting cgroup driver to use...
	I1213 10:06:49.295073  285837 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:06:49.295155  285837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:06:49.313555  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:06:49.330142  285837 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:06:49.330250  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:06:49.347394  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:06:49.361017  285837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:06:49.470304  285837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:06:49.578011  285837 docker.go:234] disabling docker service ...
	I1213 10:06:49.578102  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:06:49.592856  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:06:49.605575  285837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:06:49.713643  285837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:06:49.824293  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:06:49.838298  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:06:49.852989  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:06:49.861909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:06:49.870661  285837 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:06:49.870784  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:06:49.879670  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.888429  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:06:49.896909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.905618  285837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:06:49.913163  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:06:49.921632  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:06:49.930294  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:06:49.939291  285837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:06:49.947067  285837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:06:49.954313  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.072981  285837 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1213 10:06:50.196904  285837 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:06:50.196994  285837 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:06:50.200903  285837 start.go:564] Will wait 60s for crictl version
	I1213 10:06:50.201048  285837 ssh_runner.go:195] Run: which crictl
	I1213 10:06:50.204672  285837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:06:50.230484  285837 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:06:50.230603  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.250716  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.275578  285837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:06:50.278424  285837 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:06:50.294657  285837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:06:50.298351  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:06:50.310828  285837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:06:50.313572  285837 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:06:50.313727  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:50.313810  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.342567  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.342593  285837 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:06:50.342654  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.371166  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.371189  285837 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:06:50.371197  285837 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:06:50.371299  285837 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
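To confirm what the node actually received, the rendered unit and drop-in can be read back through the container; a sketch (container name from this run, drop-in path matching the scp destination a few lines below):

    docker exec newest-cni-987495 systemctl cat kubelet
    docker exec newest-cni-987495 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf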
	I1213 10:06:50.371378  285837 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:06:50.396100  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:50.396123  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:50.396165  285837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:06:50.396196  285837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:06:50.396373  285837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
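
The YAML block above is rendered by minikube from the kubeadm options logged at kubeadm.go:190 (note PrependCriSocketUnix:true turning the plain CRI socket path into the unix:// URL seen under nodeRegistration). A minimal Go sketch of that kind of templating; the opts struct and template here are illustrative, not minikube's real ones:

```go
// Minimal sketch of rendering a kubeadm InitConfiguration fragment from
// option values like those logged above. Not minikube's actual template.
package main

import (
	"os"
	"text/template"
)

// opts mirrors a few of the logged kubeadm options; field names are
// illustrative, not minikube's real struct.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the kubeadm.go:190 line above.
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.85.2",
		APIServerPort:    8443,
		NodeName:         "newest-cni-987495",
		CRISocket:        "/run/containerd/containerd.sock",
	}); err != nil {
		panic(err)
	}
}
```
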
	I1213 10:06:50.396459  285837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:06:50.404329  285837 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:06:50.404398  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:06:50.411842  285837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:06:50.424649  285837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:06:50.442140  285837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 10:06:50.455154  285837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:06:50.459006  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
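
The bash pipeline above rewrites /etc/hosts in two steps: drop any stale line for control-plane.minikube.internal, append the current IP mapping, then copy the temp file back over /etc/hosts. The same transformation in Go, as a pure function on the file contents (a sketch only; the real code shells out exactly as logged):

```go
package main

import (
	"fmt"
	"strings"
)

// updateHosts drops any existing line for the given host and appends a
// fresh "IP<tab>host" mapping, matching the grep -v / echo pipeline above.
func updateHosts(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.85.9\tcontrol-plane.minikube.internal\n"
	fmt.Print(updateHosts(in, "192.168.85.2", "control-plane.minikube.internal"))
}
```
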
	I1213 10:06:50.468675  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.580293  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:50.596864  285837 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 10:06:50.596887  285837 certs.go:195] generating shared ca certs ...
	I1213 10:06:50.596905  285837 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:50.597091  285837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:06:50.597205  285837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:06:50.597223  285837 certs.go:257] generating profile certs ...
	I1213 10:06:50.597356  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 10:06:50.597436  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 10:06:50.597506  285837 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 10:06:50.597658  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:06:50.597722  285837 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:06:50.597739  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:06:50.597785  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:06:50.597830  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:06:50.597864  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:06:50.597929  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:50.598639  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:06:50.618438  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:06:50.636641  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:06:50.654754  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:06:50.674470  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:06:50.692387  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:06:50.709515  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:06:50.726691  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:06:50.744316  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:06:50.762153  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:06:50.779459  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:06:50.799850  285837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:06:50.814739  285837 ssh_runner.go:195] Run: openssl version
	I1213 10:06:50.821667  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.831484  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:06:50.840240  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844034  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844100  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.885521  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:06:50.892992  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.900259  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:06:50.907747  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911335  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911425  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.952315  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:06:50.959952  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.967099  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:06:50.974300  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977776  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977836  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:06:51.019185  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
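
The repeated test/ln/openssl triples above install each PEM under /usr/share/ca-certificates and publish it under the subject-hash name that OpenSSL's /etc/ssl/certs directory lookup expects; 3ec20f2e.0, b5213941.0 and 51391683.0 are exactly those hashes. A sketch of one round trip, assuming openssl is on PATH and the certs directory is writable:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<hash>.0" symlink that OpenSSL's cert
// directory lookup expects, using `openssl x509 -hash` as the log does.
// The ".0" suffix assumes no other cert shares the same subject hash.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
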
	I1213 10:06:51.026990  285837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:06:51.031010  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:06:51.084662  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:06:51.132673  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:06:51.177864  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:06:51.221006  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:06:51.268266  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
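
Each `-checkend 86400` call above asks a single question: does the certificate expire within the next 24 hours (86400 seconds)? The equivalent check in Go, reading the PEM directly instead of shelling out:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```
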
	I1213 10:06:51.309760  285837 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:51.309854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:06:51.309920  285837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:06:51.336480  285837 cri.go:89] found id: ""
	I1213 10:06:51.336643  285837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:06:51.344873  285837 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:06:51.344892  285837 kubeadm.go:598] restartPrimaryControlPlane start ...
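
kubeadm.go:417 chooses a cluster restart over a fresh `kubeadm init` because the `sudo ls` probe two lines earlier found all three state files left by the previous run. A sketch of that predicate:

```go
package main

import (
	"fmt"
	"os"
)

// hasExistingCluster mirrors the `sudo ls` probe above: a restart is only
// attempted when every state file from a previous run is still present.
func hasExistingCluster() bool {
	for _, p := range []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("attempt cluster restart:", hasExistingCluster())
}
```
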
	I1213 10:06:51.344971  285837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:06:51.352443  285837 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:06:51.353090  285837 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.353376  285837 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-987495" cluster setting kubeconfig missing "newest-cni-987495" context setting]
	I1213 10:06:51.353816  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.355217  285837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:06:51.362937  285837 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 10:06:51.363006  285837 kubeadm.go:602] duration metric: took 18.107502ms to restartPrimaryControlPlane
	I1213 10:06:51.363022  285837 kubeadm.go:403] duration metric: took 53.271819ms to StartCluster
	I1213 10:06:51.363041  285837 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.363105  285837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.363987  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
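
kubeconfig.go:62 repaired the file by inserting the cluster and context stanzas that were missing for newest-cni-987495. A minimal sketch of the same repair, assuming k8s.io/client-go is available as a dependency (this is not minikube's actual kubeconfig.go):

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// ensureClusterEntry adds the missing cluster and context entries that
// kubeconfig.go:62 reports above, then writes the file back.
func ensureClusterEntry(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = ensureClusterEntry(
		"/home/jenkins/minikube-integration/22128-2315/kubeconfig",
		"newest-cni-987495",
		"https://192.168.85.2:8443",
	)
}
```
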
	I1213 10:06:51.364220  285837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:06:51.364499  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:51.364635  285837 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:06:51.364717  285837 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-987495"
	I1213 10:06:51.364742  285837 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-987495"
	I1213 10:06:51.364767  285837 addons.go:70] Setting default-storageclass=true in profile "newest-cni-987495"
	I1213 10:06:51.364819  285837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-987495"
	I1213 10:06:51.364774  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.365187  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.365396  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.364741  285837 addons.go:70] Setting dashboard=true in profile "newest-cni-987495"
	I1213 10:06:51.365978  285837 addons.go:239] Setting addon dashboard=true in "newest-cni-987495"
	W1213 10:06:51.365987  285837 addons.go:248] addon dashboard should already be in state true
	I1213 10:06:51.366008  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.366429  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.370287  285837 out.go:179] * Verifying Kubernetes components...
	I1213 10:06:51.373474  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:51.400526  285837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:06:51.404501  285837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:06:51.407418  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:06:51.407443  285837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:06:51.407622  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.417800  285837 addons.go:239] Setting addon default-storageclass=true in "newest-cni-987495"
	I1213 10:06:51.417844  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.418251  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.419100  285837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 10:06:47.522700  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:49.522769  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:51.523631  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:51.423855  285837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.423880  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:06:51.423942  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.466299  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.483641  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.486041  285837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.486059  285837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:06:51.486115  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.509387  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.646942  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:51.680839  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:06:51.680862  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:06:51.697914  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:06:51.697938  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:06:51.704518  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.713551  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.723021  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:06:51.723048  285837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:06:51.778125  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:06:51.778149  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:06:51.806697  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:06:51.806719  285837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:06:51.819170  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:06:51.819253  285837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:06:51.832331  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:06:51.832355  285837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:06:51.845336  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:06:51.845362  285837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:06:51.859132  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:51.859155  285837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:06:51.872954  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:52.275964  285837 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:06:52.276037  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
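
api_server.go:52 now polls `sudo pgrep -xnf kube-apiserver.*minikube.*` (the same Run line repeats at 10:06:52.776, 10:06:53.276 and 10:06:53.776 below) until the process exists. A generic polling sketch of that wait:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls the same pgrep invocation the log shows until it
// succeeds or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
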
	W1213 10:06:52.276137  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276165  285837 retry.go:31] will retry after 226.70351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276226  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276237  285837 retry.go:31] will retry after 265.695109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276427  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276440  285837 retry.go:31] will retry after 287.765057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
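
Every failure above is the same symptom (kubectl cannot reach the apiserver on localhost:8443, so manifest validation cannot download the OpenAPI schema); the manifests themselves are fine, and retry.go simply re-applies them with a growing, jittered delay (226ms, 265ms, 287ms, then 384ms, 404ms, 520ms and so on in this run). A minimal sketch of that backoff shape, assuming nothing about minikube's actual retry.go:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a roughly doubling, jittered delay,
// the same shape as the waits logged by retry.go:31 above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // apiserver not up yet
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```
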
	I1213 10:06:52.503091  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:52.542820  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:52.565377  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:52.583674  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.583713  285837 retry.go:31] will retry after 384.757306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.624746  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.624777  285837 retry.go:31] will retry after 404.862658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.656044  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.656099  285837 retry.go:31] will retry after 520.967054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.776249  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:52.969189  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.030822  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.051878  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.051909  285837 retry.go:31] will retry after 644.635232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:53.146104  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.146138  285837 retry.go:31] will retry after 713.617137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.177278  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.244074  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.244105  285837 retry.go:31] will retry after 478.208285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.276451  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:53.697474  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.722935  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.763188  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.763282  285837 retry.go:31] will retry after 791.669242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:53.776509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:53.833584  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.833619  285837 retry.go:31] will retry after 1.106769375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the ten dashboard validation errors logged immediately above]
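
The retry.go:31 entries above show minikube's addon applier backing off between kubectl apply attempts instead of failing outright; the delays grow and carry jitter (791ms, 1.1s, 1.6s, ...). Below is a minimal Go sketch of that pattern; the helper name, attempt count, and delay formula are illustrative assumptions, not minikube's actual retry implementation.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryApply retries applyFn with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log. Purely illustrative.
func retryApply(applyFn func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyFn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	_ = retryApply(func() error {
		return fmt.Errorf("connect: connection refused") // stands in for the kubectl failure
	}, 5, 500*time.Millisecond)
}
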
	I1213 10:06:53.860665  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.922352  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.922382  285837 retry.go:31] will retry after 439.211444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:54.277094  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:54.023458  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:56.023636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:54.362407  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:54.425741  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.425772  285837 retry.go:31] will retry after 994.413015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:54.555979  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:54.643378  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.643410  285837 retry.go:31] will retry after 1.597794919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
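
Every one of these failures shares a root cause: with client-side validation enabled, kubectl first downloads the OpenAPI schema from the API server, and nothing is listening on localhost:8443 yet, so the GET fails before any YAML is checked against the schema (hence kubectl's suggestion to pass --validate=false). A small Go probe of that same endpoint is sketched below; the URL is taken verbatim from the log, and TLS verification is skipped because the probe only checks connectivity.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's cert is signed by minikube's own CA; skip
		// verification since we only care whether the port answers.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// While the apiserver is down this prints the same
		// "connect: connection refused" seen throughout the log.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver is serving OpenAPI:", resp.Status)
}
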
	I1213 10:06:54.776687  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.941378  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:55.010057  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.010106  285837 retry.go:31] will retry after 1.576792043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the ten dashboard validation errors logged immediately above]
	I1213 10:06:55.276187  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:55.420648  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:55.480113  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.480142  285837 retry.go:31] will retry after 2.26666641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:55.776309  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:56.242125  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:56.276562  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:56.308877  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.308912  285837 retry.go:31] will retry after 2.70852063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
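
Interleaved with the retries, the ssh_runner lines poll roughly every 500ms for a running kube-apiserver using pgrep -xnf, matching the pattern against the full command line. A sketch of that polling loop follows; the interval and the give-up bound are inferred from the log timestamps, not taken from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 120; i++ { // give up after roughly a minute
		// -f: match against the full command line, -x: require the whole
		// line to match the pattern, -n: report only the newest match.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver up, pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver never appeared")
}
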
	I1213 10:06:56.587192  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:56.650840  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.650869  285837 retry.go:31] will retry after 1.746680045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the ten dashboard validation errors logged immediately above]
	I1213 10:06:56.776898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.276239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.747110  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:57.776721  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:57.808824  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:57.808896  285837 retry.go:31] will retry after 3.338979851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:58.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:58.397695  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:58.460604  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.460637  285837 retry.go:31] will retry after 1.622921048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the ten dashboard validation errors logged immediately above]
	I1213 10:06:58.776104  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.018609  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:59.122924  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.122951  285837 retry.go:31] will retry after 3.647698418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:06:59.276167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:58.523051  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:01.022919  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:59.776456  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:00.084206  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:00.276658  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:00.330895  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.330933  285837 retry.go:31] will retry after 4.848981129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr identical to the ten dashboard validation errors logged immediately above]
	I1213 10:07:00.776778  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.148539  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:01.211860  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.211894  285837 retry.go:31] will retry after 4.161832977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:07:01.277039  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.776560  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.276839  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.771686  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:07:02.776972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:02.901393  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:02.901424  285837 retry.go:31] will retry after 5.549971544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr identical to the apply failure logged immediately above]
	I1213 10:07:03.276936  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:03.776830  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.276724  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:03.522677  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:05.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
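
The 279351 entries interleaved here belong to the concurrent no-preload test, which is polling its node's Ready condition against 192.168.76.2:8443 and hitting the same connection-refused wall. A minimal client-go sketch of such a check is below; the kubeconfig path is a placeholder, and this is an illustration of the API call, not minikube's node_ready.go code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is unreachable this returns the same
		// "dial tcp ...:8443: connect: connection refused" seen above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("/path/to/kubeconfig", "no-preload-328069")
	fmt.Println(ready, err)
}
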
	I1213 10:07:04.777224  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.180067  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:07:05.247404  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.247439  285837 retry.go:31] will retry after 4.476695877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.276547  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.374229  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:05.433759  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.433787  285837 retry.go:31] will retry after 4.37892264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.776166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.276368  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.776601  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.276152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.777077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.277179  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.451866  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:08.512981  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.513027  285837 retry.go:31] will retry after 9.372893328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.776155  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.276770  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:08.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:10.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:09.724392  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:09.776822  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:09.785453  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.785488  285837 retry.go:31] will retry after 5.955337388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.813514  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:09.876563  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.876594  285837 retry.go:31] will retry after 6.585328869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:10.276122  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:10.776152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.276997  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.776748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.276867  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.777071  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.276725  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.776915  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.276832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:12.022989  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:14.522670  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:14.777034  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.277144  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.741108  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:15.776723  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:15.809076  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:15.809111  285837 retry.go:31] will retry after 8.411412429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.276706  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:16.462334  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:16.524133  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.524164  285837 retry.go:31] will retry after 16.275248342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.776613  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.276278  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.776240  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.886523  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:17.954531  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:17.954562  285837 retry.go:31] will retry after 10.907278655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:18.276175  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:18.776243  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.276722  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:17.022862  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:19.522763  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:21.522806  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:19.776239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.276570  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.776244  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.277087  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.776477  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.777167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.276540  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.776720  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:24.220799  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:24.276447  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:24.283800  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.283834  285837 retry.go:31] will retry after 19.949258949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:24.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:26.023564  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:24.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.276211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.776711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.276227  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.776716  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.276229  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.776183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.276941  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.776226  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.862833  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:28.922616  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:28.922648  285837 retry.go:31] will retry after 8.454738907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:29.277083  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:28.522731  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:30.522938  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:29.776182  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.277060  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.776835  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.276746  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.776414  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.276209  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.776715  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.799816  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:32.901801  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:32.901845  285837 retry.go:31] will retry after 14.65260505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:33.276216  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:33.776222  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.276756  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:33.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:35.522770  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:34.776764  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.277073  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.776211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.276331  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.776510  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.378406  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:37.440661  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.440691  285837 retry.go:31] will retry after 16.048870296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.776113  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.276917  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.276296  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:38.022809  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:40.522836  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:39.776735  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.276749  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.777116  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.277172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.776857  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.277141  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.776207  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.776690  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:44.233363  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:44.276911  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:44.294603  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.294641  285837 retry.go:31] will retry after 45.098120748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:42.523034  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:45.022823  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:44.776742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.276466  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.776133  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.280870  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.776232  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.276987  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.554729  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:47.616803  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.616837  285837 retry.go:31] will retry after 38.754607023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.776168  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.276203  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.776412  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.276189  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:47.022949  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:49.522878  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:49.776177  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.277157  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.776201  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.276146  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.776144  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:51.776242  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:51.804204  285837 cri.go:89] found id: ""
	I1213 10:07:51.804236  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.804246  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:51.804253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:51.804314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:51.829636  285837 cri.go:89] found id: ""
	I1213 10:07:51.829669  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.829679  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:51.829685  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:51.829745  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:51.857487  285837 cri.go:89] found id: ""
	I1213 10:07:51.857510  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.857519  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:51.857525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:51.857590  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:51.881972  285837 cri.go:89] found id: ""
	I1213 10:07:51.881998  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.882006  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:51.882012  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:51.882072  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:51.906050  285837 cri.go:89] found id: ""
	I1213 10:07:51.906074  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.906083  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:51.906089  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:51.906149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:51.930678  285837 cri.go:89] found id: ""
	I1213 10:07:51.930700  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.930708  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:51.930715  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:51.930774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:51.955590  285837 cri.go:89] found id: ""
	I1213 10:07:51.955661  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.955683  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:51.955701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:51.955786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:51.979349  285837 cri.go:89] found id: ""
	I1213 10:07:51.979374  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.979382  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:51.979391  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:51.979405  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:52.048255  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:52.048276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:52.048290  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:52.074149  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:52.074187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:52.103113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:52.103142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:52.161764  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:52.161797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:53.489865  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:53.547700  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:53.547730  285837 retry.go:31] will retry after 48.398435893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:52.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:54.023780  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:56.522671  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:54.676402  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:54.686866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:54.686943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:54.716493  285837 cri.go:89] found id: ""
	I1213 10:07:54.716514  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.716523  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:54.716529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:54.716584  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:54.740751  285837 cri.go:89] found id: ""
	I1213 10:07:54.740778  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.740787  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:54.740797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:54.740854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:54.763680  285837 cri.go:89] found id: ""
	I1213 10:07:54.763703  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.763712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:54.763717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:54.763773  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:54.787504  285837 cri.go:89] found id: ""
	I1213 10:07:54.787556  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.787564  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:54.787570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:54.787626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:54.812200  285837 cri.go:89] found id: ""
	I1213 10:07:54.812222  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.812231  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:54.812253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:54.812314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:54.841586  285837 cri.go:89] found id: ""
	I1213 10:07:54.841613  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.841623  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:54.841629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:54.841687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:54.865631  285837 cri.go:89] found id: ""
	I1213 10:07:54.865658  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.865667  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:54.865673  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:54.865731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:54.889746  285837 cri.go:89] found id: ""
	I1213 10:07:54.889773  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.889782  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:54.889792  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:54.889803  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:54.945120  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:54.945155  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:54.958121  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:54.958145  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:55.027564  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:55.027592  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:55.027605  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:55.053752  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:55.053788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:57.584821  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:57.597676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:57.597774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:57.621661  285837 cri.go:89] found id: ""
	I1213 10:07:57.621684  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.621692  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:57.621699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:57.621756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:57.649006  285837 cri.go:89] found id: ""
	I1213 10:07:57.649028  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.649036  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:57.649042  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:57.649107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:57.672839  285837 cri.go:89] found id: ""
	I1213 10:07:57.672866  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.672875  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:57.672881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:57.672937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:57.697343  285837 cri.go:89] found id: ""
	I1213 10:07:57.697366  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.697375  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:57.697381  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:57.697447  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:57.722254  285837 cri.go:89] found id: ""
	I1213 10:07:57.722276  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.722284  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:57.722291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:57.722346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:57.746125  285837 cri.go:89] found id: ""
	I1213 10:07:57.746150  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.746159  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:57.746165  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:57.746220  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:57.770612  285837 cri.go:89] found id: ""
	I1213 10:07:57.770679  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.770702  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:57.770720  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:57.770799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:57.795253  285837 cri.go:89] found id: ""
	I1213 10:07:57.795277  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.795285  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:57.795294  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:57.795320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:57.852923  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:57.852957  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:57.866320  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:57.866350  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:57.930573  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:57.930596  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:57.930609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:57.955644  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:57.955687  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:07:58.522782  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:00.523382  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:00.485873  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:00.498933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:00.499039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:00.588348  285837 cri.go:89] found id: ""
	I1213 10:08:00.588373  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.588383  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:00.588403  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:00.588480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:00.632508  285837 cri.go:89] found id: ""
	I1213 10:08:00.632581  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.632604  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:00.632623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:00.632721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:00.659204  285837 cri.go:89] found id: ""
	I1213 10:08:00.659231  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.659240  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:00.659246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:00.659303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:00.685440  285837 cri.go:89] found id: ""
	I1213 10:08:00.685468  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.685477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:00.685492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:00.685551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:00.710692  285837 cri.go:89] found id: ""
	I1213 10:08:00.710719  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.710728  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:00.710734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:00.710791  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:00.736661  285837 cri.go:89] found id: ""
	I1213 10:08:00.736683  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.736692  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:00.736698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:00.736766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:00.761591  285837 cri.go:89] found id: ""
	I1213 10:08:00.761617  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.761627  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:00.761634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:00.761695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:00.786438  285837 cri.go:89] found id: ""
	I1213 10:08:00.786465  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.786474  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:00.786484  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:00.786494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:00.842291  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:00.842327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:00.855993  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:00.856020  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:00.925840  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:00.925874  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:00.925888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:00.953015  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:00.953064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.486172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.496591  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:03.496662  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:03.534940  285837 cri.go:89] found id: ""
	I1213 10:08:03.534964  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.534973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:03.534979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:03.535038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:03.598662  285837 cri.go:89] found id: ""
	I1213 10:08:03.598688  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.598698  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:03.598704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:03.598766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:03.624092  285837 cri.go:89] found id: ""
	I1213 10:08:03.624114  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.624122  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:03.624129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:03.624188  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:03.649153  285837 cri.go:89] found id: ""
	I1213 10:08:03.649176  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.649185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:03.649196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:03.649255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:03.673710  285837 cri.go:89] found id: ""
	I1213 10:08:03.673778  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.673802  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:03.673822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:03.673901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:03.698952  285837 cri.go:89] found id: ""
	I1213 10:08:03.698978  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.699004  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:03.699011  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:03.699076  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:03.723499  285837 cri.go:89] found id: ""
	I1213 10:08:03.723548  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.723558  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:03.723563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:03.723626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:03.748795  285837 cri.go:89] found id: ""
	I1213 10:08:03.748819  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.748828  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:03.748837  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:03.748848  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:03.812342  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:03.812368  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:03.812388  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:03.841166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:03.841206  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.871116  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:03.871146  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:03.927807  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:03.927839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:08:03.022774  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:05.522704  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:06.441780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.452228  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:06.452309  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:06.476347  285837 cri.go:89] found id: ""
	I1213 10:08:06.476370  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.476378  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:06.476384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:06.476441  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:06.504937  285837 cri.go:89] found id: ""
	I1213 10:08:06.504961  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.504970  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:06.504977  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:06.505037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:06.553519  285837 cri.go:89] found id: ""
	I1213 10:08:06.553545  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.553553  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:06.553559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:06.553619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:06.608223  285837 cri.go:89] found id: ""
	I1213 10:08:06.608249  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.608258  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:06.608264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:06.608322  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:06.639732  285837 cri.go:89] found id: ""
	I1213 10:08:06.639801  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.639816  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:06.639823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:06.639886  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:06.668074  285837 cri.go:89] found id: ""
	I1213 10:08:06.668099  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.668108  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:06.668114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:06.668190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:06.691695  285837 cri.go:89] found id: ""
	I1213 10:08:06.691720  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.691729  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:06.691735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:06.691801  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:06.717093  285837 cri.go:89] found id: ""
	I1213 10:08:06.717120  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.717129  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:06.717140  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:06.717152  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:06.773552  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:06.773584  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.787064  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:06.787090  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:06.854164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
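The repeated "connection refused" on localhost:8443 above means nothing is accepting connections on the apiserver port at all, not that credentials or TLS are wrong. A minimal hand-check from a shell on the node (assuming SSH access; both commands are standard tooling, not part of the test run):

    curl -sk https://localhost:8443/healthz   # any HTTP response means the port is open; here the TCP connect itself is refused
    sudo ss -tlnp | grep 8443                 # no output confirms nothing is listening on 8443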
	I1213 10:08:06.854189  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:06.854202  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:06.879668  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:06.879702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
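The container-status command just above is a runtime-agnostic fallback: the backticks substitute crictl's full path when it is installed (or the bare name when it is not), and the outer || drops back to docker if the crictl invocation fails. An annotated equivalent (a sketch of the same idiom, not minikube's source):

    crictl_bin="$(which crictl || echo crictl)"    # absolute path if installed, bare name otherwise
    sudo "$crictl_bin" ps -a || sudo docker ps -a  # try the CRI CLI first, fall back to docker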
	W1213 10:08:08.022653  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:10.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:09.406742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.417411  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:09.417484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:09.442113  285837 cri.go:89] found id: ""
	I1213 10:08:09.442138  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.442147  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:09.442153  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:09.442218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:09.466316  285837 cri.go:89] found id: ""
	I1213 10:08:09.466342  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.466351  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:09.466357  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:09.466415  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:09.491678  285837 cri.go:89] found id: ""
	I1213 10:08:09.491703  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.491712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:09.491718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:09.491776  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:09.515316  285837 cri.go:89] found id: ""
	I1213 10:08:09.515337  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.515346  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:09.515352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:09.515410  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:09.567095  285837 cri.go:89] found id: ""
	I1213 10:08:09.567116  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.567125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:09.567131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:09.567197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:09.616045  285837 cri.go:89] found id: ""
	I1213 10:08:09.616067  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.616076  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:09.616082  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:09.616142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:09.640449  285837 cri.go:89] found id: ""
	I1213 10:08:09.640479  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.640488  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:09.640495  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:09.640555  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:09.664888  285837 cri.go:89] found id: ""
	I1213 10:08:09.664912  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.664921  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:09.664930  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:09.664941  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.691077  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:09.691106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:09.747246  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:09.747280  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:09.761112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:09.761140  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:09.830659  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:09.830682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:09.830695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.356184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.368119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:12.368203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:12.394250  285837 cri.go:89] found id: ""
	I1213 10:08:12.394279  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.394291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:12.394298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:12.394365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:12.419062  285837 cri.go:89] found id: ""
	I1213 10:08:12.419086  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.419095  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:12.419102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:12.419159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:12.446274  285837 cri.go:89] found id: ""
	I1213 10:08:12.446300  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.446308  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:12.446315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:12.446371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:12.469875  285837 cri.go:89] found id: ""
	I1213 10:08:12.469901  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.469910  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:12.469917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:12.469977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:12.495108  285837 cri.go:89] found id: ""
	I1213 10:08:12.495136  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.495145  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:12.495152  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:12.495207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:12.521169  285837 cri.go:89] found id: ""
	I1213 10:08:12.521190  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.521198  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:12.521204  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:12.521258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:12.557387  285837 cri.go:89] found id: ""
	I1213 10:08:12.557412  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.557421  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:12.557427  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:12.557483  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:12.586888  285837 cri.go:89] found id: ""
	I1213 10:08:12.586913  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.586922  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:12.586931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:12.586942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:12.654328  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:12.654361  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:12.668044  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:12.668071  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:12.737226  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:12.737248  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:12.737261  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.762749  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:12.762783  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:12.022956  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:14.522703  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:15.289142  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.301958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:15.302029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:15.330317  285837 cri.go:89] found id: ""
	I1213 10:08:15.330344  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.330353  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:15.330359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:15.330423  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:15.358090  285837 cri.go:89] found id: ""
	I1213 10:08:15.358115  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.358124  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:15.358130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:15.358187  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:15.382832  285837 cri.go:89] found id: ""
	I1213 10:08:15.382862  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.382871  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:15.382877  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:15.382940  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:15.409515  285837 cri.go:89] found id: ""
	I1213 10:08:15.409539  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.409549  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:15.409555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:15.409613  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:15.433885  285837 cri.go:89] found id: ""
	I1213 10:08:15.433911  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.433920  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:15.433926  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:15.433989  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:15.458618  285837 cri.go:89] found id: ""
	I1213 10:08:15.458643  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.458653  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:15.458659  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:15.458715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:15.482592  285837 cri.go:89] found id: ""
	I1213 10:08:15.482616  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.482625  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:15.482635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:15.482693  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:15.511125  285837 cri.go:89] found id: ""
	I1213 10:08:15.511153  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.511163  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:15.511172  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:15.511183  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:15.584797  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:15.584833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:15.598725  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:15.598752  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:15.681678  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:15.681701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:15.681714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:15.707610  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:15.707646  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:18.235184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.246689  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:18.246762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:18.271129  285837 cri.go:89] found id: ""
	I1213 10:08:18.271155  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.271165  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:18.271172  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:18.271240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:18.296110  285837 cri.go:89] found id: ""
	I1213 10:08:18.296135  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.296144  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:18.296150  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:18.296208  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:18.321267  285837 cri.go:89] found id: ""
	I1213 10:08:18.321290  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.321304  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:18.321311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:18.321368  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:18.349274  285837 cri.go:89] found id: ""
	I1213 10:08:18.349300  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.349309  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:18.349315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:18.349414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:18.373235  285837 cri.go:89] found id: ""
	I1213 10:08:18.373310  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.373325  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:18.373335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:18.373395  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:18.397157  285837 cri.go:89] found id: ""
	I1213 10:08:18.397181  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.397190  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:18.397196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:18.397283  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:18.421144  285837 cri.go:89] found id: ""
	I1213 10:08:18.421168  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.421177  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:18.421184  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:18.421243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:18.449567  285837 cri.go:89] found id: ""
	I1213 10:08:18.449643  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.449659  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:18.449670  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:18.449682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:18.505803  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:18.505836  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:18.520075  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:18.520099  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:18.640681  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:18.640706  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:18.640720  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:18.666166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:18.666201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:17.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:19.522795  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:20.031934  279351 node_ready.go:38] duration metric: took 6m0.009733727s for node "no-preload-328069" to be "Ready" ...
	I1213 10:08:20.035146  279351 out.go:203] 
	W1213 10:08:20.038039  279351 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:08:20.038064  279351 out.go:285] * 
	W1213 10:08:20.040199  279351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:08:20.043110  279351 out.go:203] 
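The wait loop above polls the node's Ready condition via the apiserver until the 6m0s deadline expires. Against a reachable apiserver the same condition can be read directly with a kubectl JSONPath filter (assuming a working kubeconfig); against this cluster the GET itself is refused, exactly as in the retry messages:

    kubectl get node no-preload-328069 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" on a Ready node; here the request fails before any condition is returned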
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953597846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953660485Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953767875Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953849370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953910048Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953970676Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954065652Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954126674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954193457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954302308Z" level=info msg="Connect containerd service"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954668147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.955354550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966201405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966268516Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966298096Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966337842Z" level=info msg="Start recovering state"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985180374Z" level=info msg="Start event monitor"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985226668Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985236646Z" level=info msg="Start streaming server"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985245721Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985254336Z" level=info msg="runtime interface starting up..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985260564Z" level=info msg="starting plugins..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985290447Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:02:17 no-preload-328069 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.987150021Z" level=info msg="containerd successfully booted in 0.060163s"
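One detail worth noting in the containerd startup log: the CNI loader finds no network config in /etc/cni/net.d, which is expected here since no kindnet container ever started (see the empty crictl listings earlier), so nothing wrote one. Two quick checks on the node (standard commands, assuming SSH access):

    ls -l /etc/cni/net.d/                                         # empty on this node, hence the warning
    sudo journalctl -u containerd --no-pager -n 50 | grep -i cni  # re-surfaces the "cni plugin not initialized" line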
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:21.197688    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.198516    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.200230    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.200521    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.202398    3891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:08:21 up  1:50,  0 user,  load average: 0.37, 0.50, 1.20
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:08:17 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:08:18 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 13 10:08:18 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:18 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:18 no-preload-328069 kubelet[3769]: E1213 10:08:18.584739    3769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:08:18 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:08:18 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:08:19 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 13 10:08:19 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:19 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:19 no-preload-328069 kubelet[3774]: E1213 10:08:19.340104    3774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:08:19 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:08:19 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:20 no-preload-328069 kubelet[3779]: E1213 10:08:20.149688    3779 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:08:20 no-preload-328069 kubelet[3802]: E1213 10:08:20.847235    3802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:08:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
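The kubelet lines above are the root cause of everything else in this group: each restart (counters 480-483 in this excerpt) exits during config validation because this kubelet build refuses to run on a cgroup v1 host, so the static pods (apiserver, etcd, scheduler) never come up and every localhost:8443 call is refused. The host's cgroup mode is easy to confirm (standard check):

    stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1

The kernel string in the section above (5.15.0-1084-aws, Ubuntu 20.04) is consistent with this: Ubuntu releases before 21.10 default to cgroup v1.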
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 2 (439.551012ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
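The status probe above pulls a single field through a Go template; the same mechanism prints several at once, which is handy when triaging by hand (field names as used by minikube status; exact values depend on the probe results at that moment):

    out/minikube-linux-arm64 status -p no-preload-328069 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'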
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (101.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 10:05:11.200809    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:05:38.905503    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:06:40.004911    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.621283755s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
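The four validation errors in the stderr block above are a symptom, not the fault: kubectl cannot download the OpenAPI schema because the apiserver is down, and the suggested --validate=false would not help since apply still has to reach the server. A more direct probe, reusing the kubeconfig and binary paths from the failing command (assuming a shell on the node):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz
    # returns "ok" from a healthy apiserver; here it fails with "connection refused"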
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-987495
helpers_test.go:244: (dbg) docker inspect newest-cni-987495:

-- stdout --
	[
	    {
	        "Id": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	        "Created": "2025-12-13T09:56:44.68064601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 271479,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:56:44.745643975Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hosts",
	        "LogPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac-json.log",
	        "Name": "/newest-cni-987495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-987495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-987495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	                "LowerDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-987495",
	                "Source": "/var/lib/docker/volumes/newest-cni-987495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-987495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-987495",
	                "name.minikube.sigs.k8s.io": "newest-cni-987495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8379243b191e450952047cb2444adc94946b4951abd396603cd88d0baeaa0bc8",
	            "SandboxKey": "/var/run/docker/netns/8379243b191e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-987495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:c3:b8:48:db:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1cc05b29a6a537694a06e8a33e1431f6867104db51c8eb4299d9f9f07c01c4",
	                    "EndpointID": "6785b1ba4a8acc1a6b6d8f39bbe13572d604692626753d08e29f1862fd47e00f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-987495",
	                        "5d45a23b08cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
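In the inspect output above, HostConfig.PortBindings leaves every HostPort empty (Docker assigns ephemeral host ports at container start), while NetworkSettings.Ports carries the actual assignments (33093-33097). A minimal Go sketch of resolving such a port with the same inspect template the harness runs later in this log; the profile name and the 22/tcp mapping are taken from the output above, everything else is illustrative:

	// hostport.go: shell out to `docker container inspect` with a Go template,
	// mirroring the cli_runner.go:164 invocations that appear below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("newest-cni-987495", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(port) // "33093" per the NetworkSettings.Ports block above
	}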
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 6 (350.404206ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:06:41.571871  285307 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
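The exit status 6 stems from the kubeconfig check at status.go:458: the cluster host is Running, but the profile's context is missing from the kubeconfig, which is what the stdout hint about `minikube update-context` addresses. A sketch of the same check using client-go; this is an assumption about how one could reproduce it, not minikube's exact code, with the path and context name copied from the log lines above:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22128-2315/kubeconfig")
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["newest-cni-987495"]; !ok {
			// The condition the harness hit; `minikube update-context` rewrites the entry.
			fmt.Println(`"newest-cni-987495" does not appear in kubeconfig`)
		}
	}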
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ stop    │ -p embed-certs-238987 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:52 UTC │
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:02:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:02:11.945228  279351 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:02:11.945357  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945368  279351 out.go:374] Setting ErrFile to fd 2...
	I1213 10:02:11.945373  279351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:02:11.945614  279351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:02:11.945995  279351 out.go:368] Setting JSON to false
	I1213 10:02:11.946845  279351 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6284,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:02:11.946916  279351 start.go:143] virtualization:  
	I1213 10:02:11.952053  279351 out.go:179] * [no-preload-328069] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:02:11.955099  279351 notify.go:221] Checking for updates...
	I1213 10:02:11.955646  279351 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:02:11.958871  279351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:02:11.961865  279351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:11.964714  279351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:02:11.967733  279351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:02:11.970563  279351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:02:11.973905  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:11.974462  279351 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:02:11.997403  279351 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:02:11.997517  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.056888  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.046991024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.057004  279351 docker.go:319] overlay module found
	I1213 10:02:12.060124  279351 out.go:179] * Using the docker driver based on existing profile
	I1213 10:02:12.062920  279351 start.go:309] selected driver: docker
	I1213 10:02:12.062939  279351 start.go:927] validating driver "docker" against &{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.063028  279351 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:02:12.063866  279351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:02:12.125598  279351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:02:12.116735082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:02:12.125931  279351 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:02:12.125965  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:12.126013  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:12.126061  279351 start.go:353] cluster config:
	{Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:12.130988  279351 out.go:179] * Starting "no-preload-328069" primary control-plane node in "no-preload-328069" cluster
	I1213 10:02:12.133837  279351 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:02:12.136720  279351 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:02:12.139557  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:12.139700  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.140016  279351 cache.go:107] acquiring lock: {Name:mk1139c6b82931eb02e4fc01be1646c4b5fb6137 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140101  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 10:02:12.140115  279351 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.272µs
	I1213 10:02:12.140129  279351 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 10:02:12.140147  279351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:02:12.140331  279351 cache.go:107] acquiring lock: {Name:mkdbfdeb98feed2961bb0c3f8a6d24ab310632c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140399  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 10:02:12.140411  279351 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 85.319µs
	I1213 10:02:12.140418  279351 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140432  279351 cache.go:107] acquiring lock: {Name:mke9e3c7a7c5dbec5022163863159aa6109df603 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140467  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 10:02:12.140476  279351 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 47.475µs
	I1213 10:02:12.140483  279351 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140493  279351 cache.go:107] acquiring lock: {Name:mkc53cc9694a66de0b7b66cb687f9b4074b3c86b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140525  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 10:02:12.140535  279351 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.659µs
	I1213 10:02:12.140542  279351 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140552  279351 cache.go:107] acquiring lock: {Name:mk349a8caa03fed06b3fb3e0b39b00347dcb9b37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140580  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 10:02:12.140590  279351 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 38.45µs
	I1213 10:02:12.140596  279351 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 10:02:12.140607  279351 cache.go:107] acquiring lock: {Name:mk3eb587f4f7424524980a5884c47c318ddc6f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140639  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 10:02:12.140648  279351 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 41.723µs
	I1213 10:02:12.140653  279351 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 10:02:12.140663  279351 cache.go:107] acquiring lock: {Name:mk0e27a2c36e6dbaae7432bc4e472a6212c75814 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140693  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 10:02:12.140711  279351 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.993µs
	I1213 10:02:12.140720  279351 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 10:02:12.140730  279351 cache.go:107] acquiring lock: {Name:mk07cf085b7776efa96cbbe85a2f7495a2806d09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.140801  279351 cache.go:115] /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 10:02:12.140813  279351 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 83.981µs
	I1213 10:02:12.140820  279351 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 10:02:12.140827  279351 cache.go:87] Successfully saved all images to host disk.
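Each cache.go:107/115 pair above follows the same pattern: take a per-image lock, stat the tarball under .minikube/cache/images/arm64, and skip the save when it already exists, which is why every image reports success in well under a millisecond. A minimal sketch of that existence check; the path-mangling helper is an assumption for illustration:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePath maps "registry.k8s.io/pause:3.10.1" to
	// ".../registry.k8s.io/pause_3.10.1", matching the cache paths logged above.
	func cachePath(cacheDir, image string) string {
		return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	func main() {
		dir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
		p := cachePath(dir, "registry.k8s.io/pause:3.10.1")
		if _, err := os.Stat(p); err == nil {
			fmt.Println(p, "exists, skipping save")
		}
	}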
	I1213 10:02:12.158842  279351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:02:12.158865  279351 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:02:12.158888  279351 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:02:12.158915  279351 start.go:360] acquireMachinesLock for no-preload-328069: {Name:mkb27df066f9039321ce696d5a7013e52143011a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:02:12.158977  279351 start.go:364] duration metric: took 42.741µs to acquireMachinesLock for "no-preload-328069"
	I1213 10:02:12.158998  279351 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:02:12.159006  279351 fix.go:54] fixHost starting: 
	I1213 10:02:12.159253  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.176273  279351 fix.go:112] recreateIfNeeded on no-preload-328069: state=Stopped err=<nil>
	W1213 10:02:12.176305  279351 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:02:12.181446  279351 out.go:252] * Restarting existing docker container for "no-preload-328069" ...
	I1213 10:02:12.181532  279351 cli_runner.go:164] Run: docker start no-preload-328069
	I1213 10:02:12.462743  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:12.496878  279351 kic.go:430] container "no-preload-328069" state is running.
	I1213 10:02:12.497965  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:12.519887  279351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/config.json ...
	I1213 10:02:12.520284  279351 machine.go:94] provisionDockerMachine start ...
	I1213 10:02:12.520377  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:12.540812  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:12.541137  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:12.541152  279351 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:02:12.541877  279351 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:02:15.695176  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 10:02:15.695202  279351 ubuntu.go:182] provisioning hostname "no-preload-328069"
	I1213 10:02:15.695302  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.713225  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.713580  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.713597  279351 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-328069 && echo "no-preload-328069" | sudo tee /etc/hostname
	I1213 10:02:15.876751  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-328069
	
	I1213 10:02:15.876830  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:15.894850  279351 main.go:143] libmachine: Using SSH client type: native
	I1213 10:02:15.895176  279351 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1213 10:02:15.895200  279351 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-328069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-328069/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-328069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:02:16.048412  279351 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:02:16.048436  279351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:02:16.048458  279351 ubuntu.go:190] setting up certificates
	I1213 10:02:16.048468  279351 provision.go:84] configureAuth start
	I1213 10:02:16.048553  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.075718  279351 provision.go:143] copyHostCerts
	I1213 10:02:16.075798  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:02:16.075813  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:02:16.075907  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:02:16.076022  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:02:16.076028  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:02:16.076054  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:02:16.076133  279351 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:02:16.076138  279351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:02:16.076163  279351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:02:16.076218  279351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.no-preload-328069 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-328069]
	I1213 10:02:16.381103  279351 provision.go:177] copyRemoteCerts
	I1213 10:02:16.381179  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:02:16.381229  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.401342  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.507428  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:02:16.525230  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:02:16.542799  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:02:16.561062  279351 provision.go:87] duration metric: took 512.572112ms to configureAuth
	I1213 10:02:16.561095  279351 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:02:16.561318  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:16.561332  279351 machine.go:97] duration metric: took 4.041034442s to provisionDockerMachine
	I1213 10:02:16.561341  279351 start.go:293] postStartSetup for "no-preload-328069" (driver="docker")
	I1213 10:02:16.561352  279351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:02:16.561415  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:02:16.561466  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.581239  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.687645  279351 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:02:16.691142  279351 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:02:16.691212  279351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:02:16.691231  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:02:16.691302  279351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:02:16.691382  279351 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:02:16.691493  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:02:16.698909  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:16.716254  279351 start.go:296] duration metric: took 154.898803ms for postStartSetup
	I1213 10:02:16.716393  279351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:02:16.716444  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.733818  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.836603  279351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:02:16.841822  279351 fix.go:56] duration metric: took 4.68280802s for fixHost
	I1213 10:02:16.841848  279351 start.go:83] releasing machines lock for "no-preload-328069", held for 4.682859762s
	I1213 10:02:16.841920  279351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-328069
	I1213 10:02:16.859796  279351 ssh_runner.go:195] Run: cat /version.json
	I1213 10:02:16.859857  279351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:02:16.859863  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.859911  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:16.883792  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:16.886103  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:17.082036  279351 ssh_runner.go:195] Run: systemctl --version
	I1213 10:02:17.088528  279351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:02:17.092773  279351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:02:17.092838  279351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:02:17.100613  279351 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:02:17.100639  279351 start.go:496] detecting cgroup driver to use...
	I1213 10:02:17.100671  279351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:02:17.100716  279351 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:02:17.117849  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:02:17.130707  279351 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:02:17.130820  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:02:17.146153  279351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:02:17.159452  279351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:02:17.271735  279351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:02:17.386128  279351 docker.go:234] disabling docker service ...
	I1213 10:02:17.386205  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:02:17.401329  279351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:02:17.414137  279351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:02:17.532620  279351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:02:17.660743  279351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:02:17.673611  279351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:02:17.687734  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:02:17.696861  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:02:17.705596  279351 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:02:17.705702  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:02:17.714350  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.723153  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:02:17.732016  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:02:17.740626  279351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:02:17.748540  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:02:17.757314  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:02:17.766110  279351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:02:17.774949  279351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:02:17.782195  279351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:02:17.789627  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:17.894369  279351 ssh_runner.go:195] Run: sudo systemctl restart containerd
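The sequence above rewrites /etc/containerd/config.toml through a series of in-place sed edits (sandbox image, cgroup driver, runtime type, CNI conf dir) before restarting containerd. A sketch of the SystemdCgroup edit expressed in Go with the same regular expression the sed command uses; direct write access to the file is assumed (on the node this runs under sudo, as logged):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}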
	I1213 10:02:17.987177  279351 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:02:17.987297  279351 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:02:17.991600  279351 start.go:564] Will wait 60s for crictl version
	I1213 10:02:17.991728  279351 ssh_runner.go:195] Run: which crictl
	I1213 10:02:17.995375  279351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:02:18.022384  279351 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:02:18.022552  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.048621  279351 ssh_runner.go:195] Run: containerd --version
	I1213 10:02:18.076009  279351 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:02:18.078918  279351 cli_runner.go:164] Run: docker network inspect no-preload-328069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:02:18.096351  279351 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 10:02:18.100312  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
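The one-liner above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal line, append the current gateway mapping, write a temp file, then copy it back with sudo. The same logic as a Go sketch (the temp path is illustrative; the privileged copy back is left as the final step, as in the log):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.76.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // grep -v $'\thost.minikube.internal$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
		// A privileged `sudo cp /tmp/hosts.new /etc/hosts` would follow.
	}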
	I1213 10:02:18.110269  279351 kubeadm.go:884] updating cluster {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:02:18.110401  279351 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:02:18.110451  279351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:02:18.137499  279351 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:02:18.137523  279351 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:02:18.137531  279351 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:02:18.137633  279351 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-328069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
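
Editor's note: the generated kubelet unit above uses the standard systemd drop-in idiom: an empty `ExecStart=` line first clears any ExecStart inherited from the base unit, and the next line redefines it. A small sketch that renders such a drop-in; the helper name is hypothetical and the flags are trimmed to the ones shown in the log.

// Sketch: render a kubelet systemd drop-in. The empty "ExecStart=" resets
// the inherited command before the override redefines it.
package main

import "fmt"

func kubeletDropIn(binDir, node, ip string) string {
	return fmt.Sprintf(`[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=%s/kubelet --hostname-override=%s --node-ip=%s --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`, binDir, node, ip)
}
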
	I1213 10:02:18.137698  279351 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:02:18.163191  279351 cni.go:84] Creating CNI manager for ""
	I1213 10:02:18.163216  279351 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:02:18.163234  279351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:02:18.163255  279351 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-328069 NodeName:no-preload-328069 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:02:18.163402  279351 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-328069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
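
Editor's note: the config dump above is one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), with kubelet disk eviction deliberately disabled (imageGCHighThresholdPercent: 100, all evictionHard thresholds at "0%") and kube-proxy conntrack timeouts zeroed so host sysctls are left alone. A quick way to sanity-check such a multi-document file before shipping it, using gopkg.in/yaml.v3; this is an illustrative sketch, not how minikube itself validates the config.

// Sketch: split a kubeadm config into its YAML documents and confirm each
// one parses, printing kind/apiVersion for a quick eyeball check.
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

func checkKubeadmConfig(doc string) error {
	for i, part := range strings.Split(doc, "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(part), &m); err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		fmt.Printf("doc %d: kind=%v apiVersion=%v\n", i, m["kind"], m["apiVersion"])
	}
	return nil
}
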
	I1213 10:02:18.163480  279351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:02:18.171245  279351 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:02:18.171338  279351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:02:18.178895  279351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:02:18.191581  279351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:02:18.209596  279351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1213 10:02:18.222717  279351 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:02:18.227371  279351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:02:18.237443  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:18.378945  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:18.395659  279351 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069 for IP: 192.168.76.2
	I1213 10:02:18.395721  279351 certs.go:195] generating shared ca certs ...
	I1213 10:02:18.395754  279351 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:18.395941  279351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:02:18.396012  279351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:02:18.396046  279351 certs.go:257] generating profile certs ...
	I1213 10:02:18.396189  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.key
	I1213 10:02:18.396294  279351 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key.f5afe91a
	I1213 10:02:18.396360  279351 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key
	I1213 10:02:18.396502  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:02:18.396559  279351 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:02:18.396589  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:02:18.396649  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:02:18.396703  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:02:18.396763  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:02:18.396836  279351 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:02:18.397509  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:02:18.418112  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:02:18.438679  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:02:18.457466  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:02:18.475034  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:02:18.492480  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:02:18.509931  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:02:18.526519  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:02:18.543688  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:02:18.560978  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:02:18.577824  279351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:02:18.595597  279351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:02:18.608319  279351 ssh_runner.go:195] Run: openssl version
	I1213 10:02:18.614518  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.622207  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:02:18.629586  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633292  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.633355  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:02:18.674403  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:02:18.682293  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.689424  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:02:18.697040  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700632  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.700740  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:02:18.741591  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:02:18.749136  279351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.756646  279351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:02:18.764252  279351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768073  279351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.768140  279351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:02:18.809211  279351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
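
Editor's note: each of the three cert rounds above follows the same recipe: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 (hence the checks for b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CAs. A sketch of the hash-and-link step, shelling out to openssl just as the log does; paths and the function name are illustrative.

// Sketch: install a CA cert the way the log does: symlink it into
// /etc/ssl/certs under its OpenSSL subject hash with a ".0" suffix.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, as `ln -fs` would
	if err := os.Symlink(pemPath, link); err != nil {
		return err
	}
	fmt.Println("linked", pemPath, "->", link)
	return nil
}
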
	I1213 10:02:18.816468  279351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:02:18.820048  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:02:18.860814  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:02:18.901547  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:02:18.942314  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:02:18.983558  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:02:19.024500  279351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
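
Editor's note: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is what the six runs above are probing. The same check can be done natively with crypto/x509; a sketch follows (minikube shells out instead, as the log shows).

// Sketch: the Go equivalent of `openssl x509 -checkend 86400`:
// fail if the certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkEnd(certPath string, within time.Duration) error {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", certPath, cert.NotAfter)
	}
	return nil
}
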
	I1213 10:02:19.067253  279351 kubeadm.go:401] StartCluster: {Name:no-preload-328069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-328069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:02:19.067362  279351 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:02:19.067437  279351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:02:19.094782  279351 cri.go:89] found id: ""
	I1213 10:02:19.094872  279351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:02:19.102658  279351 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:02:19.102679  279351 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:02:19.102731  279351 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:02:19.110008  279351 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:02:19.110442  279351 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-328069" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.110549  279351 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-328069" cluster setting kubeconfig missing "no-preload-328069" context setting]
	I1213 10:02:19.110833  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.112165  279351 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:02:19.119655  279351 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 10:02:19.119686  279351 kubeadm.go:602] duration metric: took 17.001518ms to restartPrimaryControlPlane
	I1213 10:02:19.119696  279351 kubeadm.go:403] duration metric: took 52.455088ms to StartCluster
	I1213 10:02:19.119710  279351 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.119764  279351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:02:19.120342  279351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:02:19.120541  279351 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:02:19.120828  279351 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:02:19.120875  279351 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:02:19.120946  279351 addons.go:70] Setting storage-provisioner=true in profile "no-preload-328069"
	I1213 10:02:19.120959  279351 addons.go:239] Setting addon storage-provisioner=true in "no-preload-328069"
	I1213 10:02:19.120992  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121000  279351 addons.go:70] Setting dashboard=true in profile "no-preload-328069"
	I1213 10:02:19.121019  279351 addons.go:239] Setting addon dashboard=true in "no-preload-328069"
	W1213 10:02:19.121026  279351 addons.go:248] addon dashboard should already be in state true
	I1213 10:02:19.121047  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.121443  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.121464  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.123823  279351 addons.go:70] Setting default-storageclass=true in profile "no-preload-328069"
	I1213 10:02:19.124331  279351 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-328069"
	I1213 10:02:19.125424  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.125429  279351 out.go:179] * Verifying Kubernetes components...
	I1213 10:02:19.128526  279351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:02:19.159919  279351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:02:19.162662  279351 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:02:19.165476  279351 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:02:19.165500  279351 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.165540  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:02:19.165616  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.168247  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:02:19.168273  279351 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:02:19.168347  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.174889  279351 addons.go:239] Setting addon default-storageclass=true in "no-preload-328069"
	I1213 10:02:19.174936  279351 host.go:66] Checking if "no-preload-328069" exists ...
	I1213 10:02:19.175371  279351 cli_runner.go:164] Run: docker container inspect no-preload-328069 --format={{.State.Status}}
	I1213 10:02:19.207894  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.232585  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
	I1213 10:02:19.238233  279351 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.238255  279351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:02:19.238316  279351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-328069
	I1213 10:02:19.263752  279351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/no-preload-328069/id_rsa Username:docker}
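
Editor's note: each sshutil line above opens an SSH client to the node through the Docker-published port (127.0.0.1:33098) using the profile's id_rsa key. A minimal golang.org/x/crypto/ssh sketch of that client setup; the relaxed host-key callback is acceptable only because these are disposable local nodes, and the function name is illustrative.

// Sketch: dial a minikube node over SSH with key auth, as sshutil does.
package main

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func dialNode(addr, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local node only
	}
	return ssh.Dial("tcp", addr, cfg) // e.g. "127.0.0.1:33098"
}
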
	I1213 10:02:19.335605  279351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:02:19.413293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:19.437951  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:02:19.437973  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:02:19.451798  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:19.498903  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:02:19.498969  279351 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:02:19.535605  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:02:19.535632  279351 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:02:19.549971  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:02:19.549998  279351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:02:19.563358  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:02:19.563384  279351 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:02:19.576961  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:02:19.576985  279351 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:02:19.590019  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:02:19.590047  279351 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:02:19.603026  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:02:19.603101  279351 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:02:19.616283  279351 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:19.616306  279351 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:02:19.629758  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.022144  279351 node_ready.go:35] waiting up to 6m0s for node "no-preload-328069" to be "Ready" ...
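
Editor's note: the node_ready wait above polls the node object until its Ready condition turns True, for up to 6 minutes. A client-go sketch of that loop, under the assumption of an already-built clientset; names and the 2s interval are illustrative.

// Sketch: poll a node's Ready condition until it is True or the context
// expires, in the spirit of the node_ready.go wait above.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	for {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}
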
	W1213 10:02:20.022218  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022247  279351 retry.go:31] will retry after 222.509243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.022338  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.022352  279351 retry.go:31] will retry after 268.916005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.022845  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.023027  279351 retry.go:31] will retry after 142.748547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
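
Editor's note: every apply above fails for the same root cause: the apiserver on localhost:8443 is not up yet, so kubectl's validation cannot download the OpenAPI schema. retry.go simply reschedules each apply after a short, varying delay (222ms, 268ms, 142ms, ...). A generic sketch of that retry-with-backoff-and-jitter loop; the function and parameters are illustrative, not minikube's retry.go.

// Sketch: retry a flaky operation with growing, jittered delays so that
// concurrent retries do not synchronize.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base * time.Duration(1<<i)          // exponential growth
		d += time.Duration(rand.Int63n(int64(d/2))) // up to 50% jitter
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, d)
		time.Sleep(d)
	}
	return err
}

Once the apiserver finally answers on :8443, the next scheduled attempt succeeds and the loop exits, which is exactly the behaviour the remaining retries below are driving toward.
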
	I1213 10:02:20.166410  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:20.226014  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.226097  279351 retry.go:31] will retry after 425.843394ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.244927  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:20.292349  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:20.310341  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.310377  279351 retry.go:31] will retry after 355.473376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.349816  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.349858  279351 retry.go:31] will retry after 264.866281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.615981  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:20.652460  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:02:20.666962  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:20.692927  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.693006  279351 retry.go:31] will retry after 664.622012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.735811  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.735905  279351 retry.go:31] will retry after 823.814702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:20.764147  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:20.764185  279351 retry.go:31] will retry after 778.225677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.358304  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.419247  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.419281  279351 retry.go:31] will retry after 462.360443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.543454  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:02:21.560472  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:21.637848  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.637931  279351 retry.go:31] will retry after 761.466559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:21.651294  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.651336  279351 retry.go:31] will retry after 529.51866ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.882480  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:21.939004  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:21.939036  279351 retry.go:31] will retry after 1.587615767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:22.022643  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
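The node_ready.go warning interleaved here is the parallel readiness poll: minikube repeatedly asks the apiserver for the node's Ready condition and tolerates connection-refused errors while the control plane comes back. A hedged client-go sketch of that check, with the kubeconfig path and node name taken from the log (the helper is illustrative, not minikube's node_ready implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the named node's Ready condition is True.
	// Transient errors (connection refused during an apiserver restart, as
	// in the log) are returned to the caller, which is expected to retry.
	func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(config), "no-preload-328069")
		fmt.Println(ready, err)
	}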
	I1213 10:02:22.181172  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:22.245389  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.245423  279351 retry.go:31] will retry after 1.713713268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.399656  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:22.456680  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:22.456710  279351 retry.go:31] will retry after 1.136977531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.527628  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:02:23.594019  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:23.601576  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.601611  279351 retry.go:31] will retry after 1.62095546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:23.655668  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.655711  279351 retry.go:31] will retry after 2.767396253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:23.960301  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:24.023123  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:24.027493  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:24.027609  279351 retry.go:31] will retry after 2.083793774s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.223152  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:25.294507  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:25.294547  279351 retry.go:31] will retry after 3.357306592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:26.023508  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:26.111910  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:26.170217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.170253  279351 retry.go:31] will retry after 1.692121147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.423771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:26.478390  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:26.478420  279351 retry.go:31] will retry after 3.848755301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.863247  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:27.922311  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:27.922347  279351 retry.go:31] will retry after 3.151041885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:02:28.522771  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:28.651995  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:28.709111  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:28.709149  279351 retry.go:31] will retry after 6.321683751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.328257  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:30.391917  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:30.391949  279351 retry.go:31] will retry after 2.426020497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 10:02:30.523587  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:31.074075  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:31.135665  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:31.135702  279351 retry.go:31] will retry after 5.370688496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1213 10:02:32.818771  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:32.881303  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:32.881336  279351 retry.go:31] will retry after 6.291168603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 10:02:33.022961  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:35.031970  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:35.105661  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:35.105695  279351 retry.go:31] will retry after 7.37782956s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 10:02:35.523543  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:36.507591  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:36.594781  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:36.594821  279351 retry.go:31] will retry after 11.051382377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 10:02:37.523602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:39.173293  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:39.235217  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:39.235250  279351 retry.go:31] will retry after 10.724210844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 10:02:40.022845  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:42.022965  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:42.483792  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:42.553607  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:42.553640  279351 retry.go:31] will retry after 7.978735352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 10:02:44.522618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:46.522815  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:47.647156  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:02:47.708591  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:47.708634  279351 retry.go:31] will retry after 13.118586966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 10:02:48.523193  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:02:49.959743  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:02:50.025078  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.025108  279351 retry.go:31] will retry after 20.588870551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1213 10:02:50.533198  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:02:50.605977  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:02:50.606015  279351 retry.go:31] will retry after 10.142953159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1213 10:02:51.022904  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:53.522602  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:55.522760  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:02:58.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:00.022755  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:00.749166  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:00.808153  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.808187  279351 retry.go:31] will retry after 20.994258363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1213 10:03:00.827383  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:00.892573  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:00.892614  279351 retry.go:31] will retry after 23.506083404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 10:03:02.022886  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:04.522818  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:07.022905  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:09.522689  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:10.615035  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:10.674075  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:10.674105  279351 retry.go:31] will retry after 31.171515996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1213 10:03:12.023028  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:14.523566  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:17.022946  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:19.522805  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:21.803099  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:21.862689  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:21.862723  279351 retry.go:31] will retry after 32.702784158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
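The "will retry after …" intervals for the storageclass apply grow from 6.3s through 7.4s, 8.0s, 10.1s and 21.0s to 32.7s: roughly an exponential backoff with random jitter. Below is a minimal sketch of that pattern, assuming a doubling wait with a cap and up to 50% jitter; it is an illustration, not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping an
// exponentially growing, jittered interval between failures -- the
// "will retry after Ns" pattern seen in the log.
func retry(attempts int, base, maxWait time.Duration, fn func() error) error {
	wait := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Up to 50% random jitter so concurrent retriers spread out.
		jittered := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		if wait *= 2; wait > maxWait {
			wait = maxWait
		}
	}
	return err
}

func main() {
	calls := 0
	err := retry(6, 2*time.Second, 30*time.Second, func() error {
		if calls++; calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}

Jitter matters in this log because three addon appliers (storageclass, storage-provisioner, dashboard) are retrying concurrently; without it they would keep colliding on the same schedule.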
	W1213 10:03:22.023647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:24.399112  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:03:24.467406  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:24.467440  279351 retry.go:31] will retry after 48.135808011s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1213 10:03:24.523014  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:27.022918  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:29.522877  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:32.022758  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:34.023751  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:36.522647  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:38.522730  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:41.022772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:41.846416  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:03:41.903373  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:03:41.903405  279351 retry.go:31] will retry after 36.157114494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:43.023322  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:45.023831  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:47.522729  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:50.022691  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:52.022951  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:54.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:03:54.566096  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:03:54.623468  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:03:54.623599  279351 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:03:56.523499  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:03:59.022636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:01.022702  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:03.022778  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:05.523648  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:08.022740  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:10.522719  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:12.522937  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:12.604177  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:04:12.663716  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:12.663824  279351 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 10:04:14.523618  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:17.022816  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:18.061133  279351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:04:18.126667  279351 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:04:18.126767  279351 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:04:18.129674  279351 out.go:179] * Enabled addons: 
	I1213 10:04:18.132484  279351 addons.go:530] duration metric: took 1m59.011607468s for enable addons: enabled=[]
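Every failed apply above prints the same hint: validation can be skipped with --validate=false, which avoids the OpenAPI download from localhost:8443 that each attempt trips over. A minimal sketch of one retry with that flag, reusing the exact paths from the log; note this only bypasses client-side validation, and the apply itself would still fail for as long as the apiserver refuses connections:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml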
	W1213 10:04:19.522762  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:22.022958  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:24.023765  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:26.522595  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:28.522755  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:30.522923  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:33.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:35.522646  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:37.522741  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:40.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:42.023047  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:44.522737  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:46.522773  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:49.022679  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:51.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:53.522674  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:04:55.522749  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:04:59.174754  271045 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001131308s
	I1213 10:04:59.174784  271045 kubeadm.go:319] 
	I1213 10:04:59.174866  271045 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:04:59.174909  271045 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:04:59.175039  271045 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:04:59.175055  271045 kubeadm.go:319] 
	I1213 10:04:59.175168  271045 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:04:59.175204  271045 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:04:59.175239  271045 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:04:59.175244  271045 kubeadm.go:319] 
	I1213 10:04:59.180339  271045 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:04:59.180784  271045 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:04:59.180907  271045 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:04:59.181153  271045 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:04:59.181164  271045 kubeadm.go:319] 
	I1213 10:04:59.181233  271045 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
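The kubelet-check above is just an HTTP probe of the kubelet's local health endpoint, so the checks kubeadm recommends can be rerun by hand inside the node to confirm whether the kubelet ever came up. A sketch using only the commands quoted in the output (the curl mirrors the exact call kubeadm reports timing out):

	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz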
	I1213 10:04:59.181295  271045 kubeadm.go:403] duration metric: took 8m6.936133561s to StartCluster
	I1213 10:04:59.181332  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:04:59.181396  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:04:59.205616  271045 cri.go:89] found id: ""
	I1213 10:04:59.205641  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.205649  271045 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:04:59.205656  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:04:59.205723  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:04:59.230248  271045 cri.go:89] found id: ""
	I1213 10:04:59.230273  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.230283  271045 logs.go:284] No container was found matching "etcd"
	I1213 10:04:59.230289  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:04:59.230350  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:04:59.255440  271045 cri.go:89] found id: ""
	I1213 10:04:59.255466  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.255474  271045 logs.go:284] No container was found matching "coredns"
	I1213 10:04:59.255481  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:04:59.255559  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:04:59.279549  271045 cri.go:89] found id: ""
	I1213 10:04:59.279574  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.279583  271045 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:04:59.279590  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:04:59.279651  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:04:59.303984  271045 cri.go:89] found id: ""
	I1213 10:04:59.304010  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.304019  271045 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:04:59.304025  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:04:59.304093  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:04:59.328934  271045 cri.go:89] found id: ""
	I1213 10:04:59.328955  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.328964  271045 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:04:59.328970  271045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:04:59.329031  271045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:04:59.352693  271045 cri.go:89] found id: ""
	I1213 10:04:59.352718  271045 logs.go:282] 0 containers: []
	W1213 10:04:59.352727  271045 logs.go:284] No container was found matching "kindnet"
	I1213 10:04:59.352737  271045 logs.go:123] Gathering logs for containerd ...
	I1213 10:04:59.352748  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:04:59.389714  271045 logs.go:123] Gathering logs for container status ...
	I1213 10:04:59.389747  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:04:59.417775  271045 logs.go:123] Gathering logs for kubelet ...
	I1213 10:04:59.417803  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:04:59.476095  271045 logs.go:123] Gathering logs for dmesg ...
	I1213 10:04:59.476128  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:04:59.492802  271045 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:04:59.492834  271045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:04:59.580211  271045 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:04:59.572159    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.572949    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574547    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.574836    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:04:59.576315    4843 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 10:04:59.580238  271045 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:04:59.580298  271045 out.go:285] * 
	W1213 10:04:59.580386  271045 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.580407  271045 out.go:285] * 
	W1213 10:04:59.583250  271045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:04:59.590673  271045 out.go:203] 
	W1213 10:04:59.593644  271045 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001131308s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:04:59.594239  271045 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:04:59.594323  271045 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:04:59.597653  271045 out.go:203] 
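The suggestion above can be tried end to end from the host: restart the profile with the systemd cgroup driver, then collect logs for the issue template shown earlier. A sketch, where <profile> is a placeholder since this process's profile name does not appear in the excerpt:

	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	minikube logs --file=logs.txt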
	W1213 10:04:58.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:00.048727  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:02.522644  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:04.522700  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:07.022922  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:09.023757  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:11.523627  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:14.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:16.523725  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:05:19.022668  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	[... 33 identical "connection refused" retries, polled roughly every 2-2.5s from 10:05:21 through 10:06:34, elided ...]
	W1213 10:06:36.022804  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
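Every probe in the loop above fails the same way, so the node can never report Ready. A manual equivalent of the health check the loop is performing, run from the host, would look like this (a sketch, not part of the test harness; the IP and port come from the log lines above):

	# probe the apiserver endpoint the retry loop is hitting
	curl -sk --max-time 2 https://192.168.76.2:8443/healthz || echo "apiserver unreachable"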
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799212375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799243784Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799281225Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799299350Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799309688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799320347Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799336388Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799349476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799366469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799408233Z" level=info msg="Connect containerd service"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.799713418Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.800325698Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815613763Z" level=info msg="Start subscribing containerd event"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815835320Z" level=info msg="Start recovering state"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.815650752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.816102900Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853471915Z" level=info msg="Start event monitor"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853663023Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853738773Z" level=info msg="Start streaming server"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853805555Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853873831Z" level=info msg="runtime interface starting up..."
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.853935124Z" level=info msg="starting plugins..."
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.854001848Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 09:56:50 newest-cni-987495 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 09:56:50 newest-cni-987495 containerd[761]: time="2025-12-13T09:56:50.859113307Z" level=info msg="containerd successfully booted in 0.086519s"
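Note the earlier "failed to load cni during init" line: /etc/cni/net.d is empty when containerd boots, so the CRI plugin reports NetworkReady=false until a CNI config appears. A hedged spot-check from the host (the profile name is taken from this log; the commands assume the node container is running):

	# list CNI configs inside the node and ask the CRI plugin for its network status
	minikube ssh -p newest-cni-987495 -- "sudo ls -la /etc/cni/net.d"
	minikube ssh -p newest-cni-987495 -- "sudo crictl info" | grep -i -A2 networkReady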
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:06:42.286572    5943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:06:42.287271    5943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:06:42.289274    5943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:06:42.290115    5943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:06:42.292037    5943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
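This kubectl invocation runs inside the node against localhost:8443, so the refusal means no kube-apiserver process is bound to that port at all. A minimal way to confirm that by hand (assumed command, reusing the profile name from this log):

	# check for a listener on the apiserver port inside the node
	minikube ssh -p newest-cni-987495 -- "sudo ss -ltnp | grep 8443 || echo nothing-listening-on-8443"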
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:06:42 up  1:49,  0 user,  load average: 0.23, 0.56, 1.31
	Linux newest-cni-987495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:06:39 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:06:40 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 454.
	Dec 13 10:06:40 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:06:40 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:06:40 newest-cni-987495 kubelet[5824]: E1213 10:06:40.084266    5824 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:06:40 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:06:40 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	[... restart cycles 455 and 456 elided; each repeats the same "kubelet is configured to not run on a host using cgroup v1" validation failure ...]
	Dec 13 10:06:42 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 457.
	Dec 13 10:06:42 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:06:42 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:06:42 newest-cni-987495 kubelet[5948]: E1213 10:06:42.352712    5948 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:06:42 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:06:42 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
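This restart loop is the root cause running through the report: kubelet v1.35.0-beta.0 fails its configuration validation on a cgroup v1 host, and the Ubuntu 20.04 host shown in the kernel section typically still boots with cgroup v1. A generic check for which cgroup version a host is using (not part of the test run):

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup

If this prints tmpfs, the validation error above is expected until the host or base image is moved to cgroup v2.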
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 6 (363.612744ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 10:06:42.821264  285538 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-987495" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (101.24s)
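The status output above already names the fix for the stale kubeconfig entry; applied by hand it would look like the following (hedged; this cannot bring the apiserver back, but it should clear the "does not appear in kubeconfig" error):

	out/minikube-linux-arm64 update-context -p newest-cni-987495
	kubectl config current-context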

x
+
TestStartStop/group/newest-cni/serial/SecondStart (373.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 10:06:54.958702    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:07:14.443502    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.136002767s)

-- stdout --
	* [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 10:06:44.358606  285837 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:06:44.358774  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.358804  285837 out.go:374] Setting ErrFile to fd 2...
	I1213 10:06:44.358810  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.359110  285837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:06:44.359584  285837 out.go:368] Setting JSON to false
	I1213 10:06:44.360505  285837 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6557,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:06:44.360574  285837 start.go:143] virtualization:  
	I1213 10:06:44.365480  285837 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:06:44.368718  285837 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:06:44.368777  285837 notify.go:221] Checking for updates...
	I1213 10:06:44.374649  285837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:06:44.377632  285837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:44.380625  285837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:06:44.383607  285837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:06:44.386498  285837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:06:44.389949  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:44.390563  285837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:06:44.426169  285837 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:06:44.426412  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.479541  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.469338758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.479654  285837 docker.go:319] overlay module found
	I1213 10:06:44.482815  285837 out.go:179] * Using the docker driver based on existing profile
	I1213 10:06:44.485692  285837 start.go:309] selected driver: docker
	I1213 10:06:44.485711  285837 start.go:927] validating driver "docker" against &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.485823  285837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:06:44.486552  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.545256  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.535101087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.545615  285837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:06:44.545650  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:44.545706  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:44.545747  285837 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.548958  285837 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 10:06:44.551733  285837 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:06:44.554789  285837 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:06:44.557547  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:44.557592  285837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:06:44.557602  285837 cache.go:65] Caching tarball of preloaded images
	I1213 10:06:44.557636  285837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:06:44.557693  285837 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:06:44.557703  285837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:06:44.557824  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.577619  285837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:06:44.577644  285837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:06:44.577660  285837 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:06:44.577696  285837 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:06:44.577756  285837 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "newest-cni-987495"
	I1213 10:06:44.577778  285837 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:06:44.577787  285837 fix.go:54] fixHost starting: 
	I1213 10:06:44.578057  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.595484  285837 fix.go:112] recreateIfNeeded on newest-cni-987495: state=Stopped err=<nil>
	W1213 10:06:44.595545  285837 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:06:44.598729  285837 out.go:252] * Restarting existing docker container for "newest-cni-987495" ...
	I1213 10:06:44.598811  285837 cli_runner.go:164] Run: docker start newest-cni-987495
	I1213 10:06:44.855461  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.880412  285837 kic.go:430] container "newest-cni-987495" state is running.
	I1213 10:06:44.880797  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:44.909497  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.909726  285837 machine.go:94] provisionDockerMachine start ...
	I1213 10:06:44.909783  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:44.930622  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:44.931232  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:44.931291  285837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:06:44.932041  285837 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:06:48.091507  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.091560  285837 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 10:06:48.091625  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.110757  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.111074  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.111090  285837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 10:06:48.273955  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.274083  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.291615  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.291933  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.291961  285837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:06:48.443806  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:06:48.443836  285837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:06:48.443909  285837 ubuntu.go:190] setting up certificates
	I1213 10:06:48.443925  285837 provision.go:84] configureAuth start
	I1213 10:06:48.444014  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:48.461447  285837 provision.go:143] copyHostCerts
	I1213 10:06:48.461529  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:06:48.461544  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:06:48.461626  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:06:48.461731  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:06:48.461744  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:06:48.461773  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:06:48.461831  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:06:48.461840  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:06:48.461873  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:06:48.461929  285837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 10:06:48.588588  285837 provision.go:177] copyRemoteCerts
	I1213 10:06:48.588677  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:06:48.588742  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.606370  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.711093  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:06:48.728291  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:06:48.746238  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:06:48.763841  285837 provision.go:87] duration metric: took 319.890818ms to configureAuth
	I1213 10:06:48.763919  285837 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:06:48.764158  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:48.764172  285837 machine.go:97] duration metric: took 3.854438499s to provisionDockerMachine
	I1213 10:06:48.764181  285837 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 10:06:48.764199  285837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:06:48.764250  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:06:48.764297  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.781656  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.887571  285837 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:06:48.891032  285837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:06:48.891062  285837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:06:48.891074  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:06:48.891128  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:06:48.891231  285837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:06:48.891336  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:06:48.898692  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:48.916401  285837 start.go:296] duration metric: took 152.205033ms for postStartSetup
	I1213 10:06:48.916505  285837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:06:48.916556  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.933960  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.036570  285837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:06:49.041484  285837 fix.go:56] duration metric: took 4.463690867s for fixHost
	I1213 10:06:49.041511  285837 start.go:83] releasing machines lock for "newest-cni-987495", held for 4.463742733s
	I1213 10:06:49.041581  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:49.058404  285837 ssh_runner.go:195] Run: cat /version.json
	I1213 10:06:49.058462  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.058542  285837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:06:49.058607  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.080342  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.081196  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.272327  285837 ssh_runner.go:195] Run: systemctl --version
	I1213 10:06:49.280206  285837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:06:49.285584  285837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:06:49.285649  285837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:06:49.294944  285837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:06:49.295018  285837 start.go:496] detecting cgroup driver to use...
	I1213 10:06:49.295073  285837 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:06:49.295155  285837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:06:49.313555  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:06:49.330142  285837 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:06:49.330250  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:06:49.347394  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:06:49.361017  285837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:06:49.470304  285837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:06:49.578011  285837 docker.go:234] disabling docker service ...
	I1213 10:06:49.578102  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:06:49.592856  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:06:49.605575  285837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:06:49.713643  285837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:06:49.824293  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:06:49.838298  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:06:49.852989  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:06:49.861909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:06:49.870661  285837 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:06:49.870784  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:06:49.879670  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.888429  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:06:49.896909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.905618  285837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:06:49.913163  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:06:49.921632  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:06:49.930294  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:06:49.939291  285837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:06:49.947067  285837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:06:49.954313  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.072981  285837 ssh_runner.go:195] Run: sudo systemctl restart containerd
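The sed edits above rewrote /etc/containerd/config.toml to use the cgroupfs driver (SystemdCgroup = false) before this restart. Verifying the rewrite by hand would look like this (assumed command, not part of the trace):

	minikube ssh -p newest-cni-987495 -- "sudo grep -n SystemdCgroup /etc/containerd/config.toml"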
	I1213 10:06:50.196904  285837 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:06:50.196994  285837 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:06:50.200903  285837 start.go:564] Will wait 60s for crictl version
	I1213 10:06:50.201048  285837 ssh_runner.go:195] Run: which crictl
	I1213 10:06:50.204672  285837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:06:50.230484  285837 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:06:50.230603  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.250716  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.275578  285837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:06:50.278424  285837 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:06:50.294657  285837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:06:50.298351  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:06:50.310828  285837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:06:50.313572  285837 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:06:50.313727  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:50.313810  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.342567  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.342593  285837 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:06:50.342654  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.371166  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.371189  285837 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:06:50.371197  285837 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:06:50.371299  285837 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:06:50.371378  285837 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:06:50.396100  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:50.396123  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:50.396165  285837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
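Aside: the CNI choice logged here depends on the driver/runtime pair. A toy Go reduction of that decision, for illustration only (function and package names are invented, not minikube's actual code in pkg/minikube/cni):

    // cni_pick.go: illustrative sketch of the decision logged above.
    package main

    import "fmt"

    // chooseCNI mirrors the log line: the docker driver combined with a
    // non-docker runtime (here containerd) gets kindnet recommended.
    func chooseCNI(driver, runtime string) string {
    	if driver == "docker" && runtime != "docker" {
    		return "kindnet"
    	}
    	return "bridge"
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "containerd")) // prints: kindnet
    }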
	I1213 10:06:50.396196  285837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:06:50.396373  285837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
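The YAML above is rendered from the kubeadm options struct logged a few lines earlier. A minimal, self-contained Go sketch of that templating step, with an illustrative struct and a heavily trimmed template (not minikube's real one):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Values mirrors a small slice of the options printed in the log above.
    type Values struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values taken from the run above.
    	v := Values{"192.168.85.2", 8443, "newest-cni-987495", "10.42.0.0/16"}
    	if err := t.Execute(os.Stdout, v); err != nil {
    		panic(err)
    	}
    }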
	I1213 10:06:50.396459  285837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:06:50.404329  285837 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:06:50.404398  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:06:50.411842  285837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:06:50.424649  285837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:06:50.442140  285837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 10:06:50.455154  285837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:06:50.459006  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
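The bash pipeline above keeps the /etc/hosts entry idempotent: strip any stale control-plane.minikube.internal line, append the current mapping, and swap the rewritten file back in via a temp copy. A rough Go equivalent, with the paths hard-coded for illustration (the shell version copies from /tmp with cp; this sketch keeps the temp file next to the target so a rename works):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites hostsPath so exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func upsertHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any previous entry for this name
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	// Same-directory rename is atomic; the shell form uses cp because
    	// its temp file lives on /tmp, possibly a different filesystem.
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }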
	I1213 10:06:50.468675  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.580293  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:50.596864  285837 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 10:06:50.596887  285837 certs.go:195] generating shared ca certs ...
	I1213 10:06:50.596905  285837 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:50.597091  285837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:06:50.597205  285837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:06:50.597223  285837 certs.go:257] generating profile certs ...
	I1213 10:06:50.597356  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 10:06:50.597436  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 10:06:50.597506  285837 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 10:06:50.597658  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:06:50.597722  285837 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:06:50.597739  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:06:50.597785  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:06:50.597830  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:06:50.597864  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:06:50.597929  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:50.598639  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:06:50.618438  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:06:50.636641  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:06:50.654754  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:06:50.674470  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:06:50.692387  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:06:50.709515  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:06:50.726691  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:06:50.744316  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:06:50.762153  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:06:50.779459  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:06:50.799850  285837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:06:50.814739  285837 ssh_runner.go:195] Run: openssl version
	I1213 10:06:50.821667  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.831484  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:06:50.840240  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844034  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844100  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.885521  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:06:50.892992  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.900259  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:06:50.907747  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911335  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911425  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.952315  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:06:50.959952  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.967099  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:06:50.974300  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977776  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977836  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:06:51.019185  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
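The <hash>.0 symlinks being probed here follow OpenSSL's c_rehash convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA file so OpenSSL can find it by hashed subject name. A short sketch reproducing the two steps from the log (openssl x509 -hash, then a forced symlink), assuming openssl is on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // rehash computes the subject-name hash of certPath with openssl and
    // installs the /etc/ssl/certs/<hash>.0 symlink the log checks for.
    func rehash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // like ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }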
	I1213 10:06:51.026990  285837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:06:51.031010  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:06:51.084662  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:06:51.132673  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:06:51.177864  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:06:51.221006  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:06:51.268266  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
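Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check in pure Go via crypto/x509, reusing one of the paths from the log for illustration:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath expires
    // inside d, i.e. what `openssl x509 -checkend <seconds>` tests above.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }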
	I1213 10:06:51.309760  285837 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:51.309854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:06:51.309920  285837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:06:51.336480  285837 cri.go:89] found id: ""
	I1213 10:06:51.336643  285837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:06:51.344873  285837 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:06:51.344892  285837 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:06:51.344971  285837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:06:51.352443  285837 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:06:51.353090  285837 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.353376  285837 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-987495" cluster setting kubeconfig missing "newest-cni-987495" context setting]
	I1213 10:06:51.353816  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.355217  285837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:06:51.362937  285837 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 10:06:51.363006  285837 kubeadm.go:602] duration metric: took 18.107502ms to restartPrimaryControlPlane
	I1213 10:06:51.363022  285837 kubeadm.go:403] duration metric: took 53.271819ms to StartCluster
	I1213 10:06:51.363041  285837 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.363105  285837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.363987  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.364220  285837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:06:51.364499  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:51.364635  285837 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:06:51.364717  285837 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-987495"
	I1213 10:06:51.364742  285837 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-987495"
	I1213 10:06:51.364767  285837 addons.go:70] Setting default-storageclass=true in profile "newest-cni-987495"
	I1213 10:06:51.364819  285837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-987495"
	I1213 10:06:51.364774  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.365187  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.365396  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.364741  285837 addons.go:70] Setting dashboard=true in profile "newest-cni-987495"
	I1213 10:06:51.365978  285837 addons.go:239] Setting addon dashboard=true in "newest-cni-987495"
	W1213 10:06:51.365987  285837 addons.go:248] addon dashboard should already be in state true
	I1213 10:06:51.366008  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.366429  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.370287  285837 out.go:179] * Verifying Kubernetes components...
	I1213 10:06:51.373474  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:51.400526  285837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:06:51.404501  285837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:06:51.407418  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:06:51.407443  285837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:06:51.407622  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.417800  285837 addons.go:239] Setting addon default-storageclass=true in "newest-cni-987495"
	I1213 10:06:51.417844  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.418251  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.419100  285837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:06:51.423855  285837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.423880  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:06:51.423942  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.466299  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.483641  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.486041  285837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.486059  285837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:06:51.486115  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.509387  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.646942  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:51.680839  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:06:51.680862  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:06:51.697914  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:06:51.697938  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:06:51.704518  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.713551  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.723021  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:06:51.723048  285837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:06:51.778125  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:06:51.778149  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:06:51.806697  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:06:51.806719  285837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:06:51.819170  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:06:51.819253  285837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:06:51.832331  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:06:51.832355  285837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:06:51.845336  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:06:51.845362  285837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:06:51.859132  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:51.859155  285837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:06:51.872954  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:52.275964  285837 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:06:52.276037  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:52.276137  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276165  285837 retry.go:31] will retry after 226.70351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276226  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276237  285837 retry.go:31] will retry after 265.695109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276427  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276440  285837 retry.go:31] will retry after 287.765057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
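The retry.go lines show the pattern in play for the remainder of this start-up: each kubectl apply fails with connection refused until the apiserver comes back, and minikube sleeps a short, growing, randomized delay between attempts. A generic sketch of such a loop (the constants and function names are invented, not minikube's):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, like the "will retry after Nms" lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := time.Duration(i+1)*base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // apiserver not up yet
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }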
	I1213 10:06:52.503091  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:52.542820  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:52.565377  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:52.583674  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.583713  285837 retry.go:31] will retry after 384.757306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.624746  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.624777  285837 retry.go:31] will retry after 404.862658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.656044  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.656099  285837 retry.go:31] will retry after 520.967054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.776249  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:52.969189  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.030822  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.051878  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.051909  285837 retry.go:31] will retry after 644.635232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:53.146104  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.146138  285837 retry.go:31] will retry after 713.617137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.177278  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.244074  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.244105  285837 retry.go:31] will retry after 478.208285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.276451  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:53.697474  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.722935  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.763188  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.763282  285837 retry.go:31] will retry after 791.669242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.776509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:53.833584  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.833619  285837 retry.go:31] will retry after 1.106769375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.860665  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.922352  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.922382  285837 retry.go:31] will retry after 439.211444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.277094  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.362407  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:54.425741  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.425772  285837 retry.go:31] will retry after 994.413015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.555979  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:54.643378  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.643410  285837 retry.go:31] will retry after 1.597794919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
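The pgrep lines interleaved throughout are a liveness poll for the apiserver process; the timestamps space them roughly 500ms apart. A sketch of that loop, under the assumption that it simply waits for a matching process to appear:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls the same pgrep invocation seen in the log until a
	// kube-apiserver process appears or the timeout expires. pgrep exits 0
	// exactly when the pattern matches, so Run() == nil means "found".
	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return true
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		fmt.Println("kube-apiserver running:", waitForAPIServer(30*time.Second))
	}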
	I1213 10:06:54.776687  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.941378  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:55.010057  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.010106  285837 retry.go:31] will retry after 1.576792043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.276187  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:55.420648  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:55.480113  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.480142  285837 retry.go:31] will retry after 2.26666641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.776309  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:56.242125  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:56.276562  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:56.308877  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.308912  285837 retry.go:31] will retry after 2.70852063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.587192  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:56.650840  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.650869  285837 retry.go:31] will retry after 1.746680045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.776898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.276239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.747110  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:57.776721  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:57.808824  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:57.808896  285837 retry.go:31] will retry after 3.338979851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
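The stderr repeatedly names the standard escape hatch itself: --validate=false, which stops kubectl from needing the apiserver's OpenAPI document. A hypothetical re-run of the same command with that flag added (paths copied verbatim from the log; whether skipping validation is actually appropriate here is a separate question):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the log retries, plus --validate=false as suggested
		// by the error text. sudo accepts the leading KUBECONFIG= assignment.
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}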
	I1213 10:06:58.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:58.397695  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:58.460604  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.460637  285837 retry.go:31] will retry after 1.622921048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.776104  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.018609  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:59.122924  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.122951  285837 retry.go:31] will retry after 3.647698418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.276167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.776456  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:00.084206  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:00.276658  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:00.330895  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.330933  285837 retry.go:31] will retry after 4.848981129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.776778  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.148539  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:01.211860  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.211894  285837 retry.go:31] will retry after 4.161832977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.277039  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.776560  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.276839  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.771686  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:07:02.776972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:02.901393  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:02.901424  285837 retry.go:31] will retry after 5.549971544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:03.276936  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:03.776830  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.276724  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.777224  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
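The half-second cadence of the pgrep runs above is minikube polling for a running kube-apiserver process. A hedged Go sketch of such a poll loop, run locally here for simplicity; in minikube the command goes over SSH via ssh_runner, and the real helper has a different shape.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process, matching the
	// 500ms "sudo pgrep -xnf kube-apiserver.*minikube.*" cadence in the
	// log. Sketch only; not minikube's actual implementation.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not start within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In this failing run the loop never finds the process, which is why the pgrep lines continue uninterrupted between every apply attempt.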
	I1213 10:07:05.180067  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:07:05.247404  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.247439  285837 retry.go:31] will retry after 4.476695877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.276547  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.374229  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:05.433759  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.433787  285837 retry.go:31] will retry after 4.37892264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.776166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.276368  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.776601  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.276152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.777077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.277179  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.451866  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:08.512981  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.513027  285837 retry.go:31] will retry after 9.372893328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.776155  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.276770  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.724392  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:09.776822  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:09.785453  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.785488  285837 retry.go:31] will retry after 5.955337388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.813514  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:09.876563  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.876594  285837 retry.go:31] will retry after 6.585328869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:10.276122  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:10.776152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.276997  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.776748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.276867  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.777071  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.276725  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.776915  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.276832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.777034  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.277144  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.741108  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:15.776723  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:15.809076  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:15.809111  285837 retry.go:31] will retry after 8.411412429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.276706  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:16.462334  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:16.524133  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.524164  285837 retry.go:31] will retry after 16.275248342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.776613  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.276278  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.776240  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.886523  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:17.954531  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:17.954562  285837 retry.go:31] will retry after 10.907278655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:18.276175  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:18.776243  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.276722  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.776239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.276570  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.776244  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.277087  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.776477  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.777167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.276540  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.776720  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:24.220799  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:24.276447  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:24.283800  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.283834  285837 retry.go:31] will retry after 19.949258949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.276211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.776711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.276227  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.776716  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.276229  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.776183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.276941  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.776226  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.862833  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:28.922616  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:28.922648  285837 retry.go:31] will retry after 8.454738907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:29.277083  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:29.776182  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.277060  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.776835  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.276746  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.776414  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.276209  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.776715  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.799816  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:32.901801  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:32.901845  285837 retry.go:31] will retry after 14.65260505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:33.276216  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:33.776222  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.276756  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.776764  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.277073  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.776211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.276331  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.776510  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.378406  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:37.440661  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.440691  285837 retry.go:31] will retry after 16.048870296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.776113  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.276917  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.276296  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.776735  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.276749  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.777116  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.277172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.776857  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.277141  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.776207  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.776690  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:44.233363  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:44.276911  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:44.294603  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.294641  285837 retry.go:31] will retry after 45.098120748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.776742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.276466  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.776133  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.280870  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.776232  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.276987  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.554729  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:47.616803  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.616837  285837 retry.go:31] will retry after 38.754607023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.776168  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.276203  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.776412  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.276189  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.776177  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.277157  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.776201  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.276146  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.776144  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:51.776242  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:51.804204  285837 cri.go:89] found id: ""
	I1213 10:07:51.804236  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.804246  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:51.804253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:51.804314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:51.829636  285837 cri.go:89] found id: ""
	I1213 10:07:51.829669  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.829679  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:51.829685  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:51.829745  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:51.857487  285837 cri.go:89] found id: ""
	I1213 10:07:51.857510  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.857519  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:51.857525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:51.857590  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:51.881972  285837 cri.go:89] found id: ""
	I1213 10:07:51.881998  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.882006  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:51.882012  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:51.882072  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:51.906050  285837 cri.go:89] found id: ""
	I1213 10:07:51.906074  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.906083  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:51.906089  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:51.906149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:51.930678  285837 cri.go:89] found id: ""
	I1213 10:07:51.930700  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.930708  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:51.930715  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:51.930774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:51.955590  285837 cri.go:89] found id: ""
	I1213 10:07:51.955661  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.955683  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:51.955701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:51.955786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:51.979349  285837 cri.go:89] found id: ""
	I1213 10:07:51.979374  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.979382  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:51.979391  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:51.979405  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:52.048255  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:52.048276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:52.048290  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:52.074149  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:52.074187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:52.103113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:52.103142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:52.161764  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:52.161797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:53.489865  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:53.547700  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:53.547730  285837 retry.go:31] will retry after 48.398435893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:54.676402  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:54.686866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:54.686943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:54.716493  285837 cri.go:89] found id: ""
	I1213 10:07:54.716514  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.716523  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:54.716529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:54.716584  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:54.740751  285837 cri.go:89] found id: ""
	I1213 10:07:54.740778  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.740787  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:54.740797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:54.740854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:54.763680  285837 cri.go:89] found id: ""
	I1213 10:07:54.763703  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.763712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:54.763717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:54.763773  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:54.787504  285837 cri.go:89] found id: ""
	I1213 10:07:54.787556  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.787564  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:54.787570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:54.787626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:54.812200  285837 cri.go:89] found id: ""
	I1213 10:07:54.812222  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.812231  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:54.812253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:54.812314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:54.841586  285837 cri.go:89] found id: ""
	I1213 10:07:54.841613  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.841623  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:54.841629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:54.841687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:54.865631  285837 cri.go:89] found id: ""
	I1213 10:07:54.865658  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.865667  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:54.865673  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:54.865731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:54.889746  285837 cri.go:89] found id: ""
	I1213 10:07:54.889773  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.889782  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:54.889792  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:54.889803  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:54.945120  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:54.945155  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:54.958121  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:54.958145  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:55.027564  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:55.027592  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:55.027605  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:55.053752  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:55.053788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:57.584821  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:57.597676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:57.597774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:57.621661  285837 cri.go:89] found id: ""
	I1213 10:07:57.621684  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.621692  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:57.621699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:57.621756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:57.649006  285837 cri.go:89] found id: ""
	I1213 10:07:57.649028  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.649036  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:57.649042  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:57.649107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:57.672839  285837 cri.go:89] found id: ""
	I1213 10:07:57.672866  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.672875  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:57.672881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:57.672937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:57.697343  285837 cri.go:89] found id: ""
	I1213 10:07:57.697366  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.697375  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:57.697381  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:57.697447  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:57.722254  285837 cri.go:89] found id: ""
	I1213 10:07:57.722276  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.722284  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:57.722291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:57.722346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:57.746125  285837 cri.go:89] found id: ""
	I1213 10:07:57.746150  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.746159  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:57.746165  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:57.746220  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:57.770612  285837 cri.go:89] found id: ""
	I1213 10:07:57.770679  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.770702  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:57.770720  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:57.770799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:57.795253  285837 cri.go:89] found id: ""
	I1213 10:07:57.795277  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.795285  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:57.795294  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:57.795320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:57.852923  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:57.852957  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:57.866320  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:57.866350  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:57.930573  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:57.930596  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:57.930609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:57.955644  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:57.955687  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:00.485873  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:00.498933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:00.499039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:00.588348  285837 cri.go:89] found id: ""
	I1213 10:08:00.588373  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.588383  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:00.588403  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:00.588480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:00.632508  285837 cri.go:89] found id: ""
	I1213 10:08:00.632581  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.632604  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:00.632623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:00.632721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:00.659204  285837 cri.go:89] found id: ""
	I1213 10:08:00.659231  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.659240  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:00.659246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:00.659303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:00.685440  285837 cri.go:89] found id: ""
	I1213 10:08:00.685468  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.685477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:00.685492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:00.685551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:00.710692  285837 cri.go:89] found id: ""
	I1213 10:08:00.710719  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.710728  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:00.710734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:00.710791  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:00.736661  285837 cri.go:89] found id: ""
	I1213 10:08:00.736683  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.736692  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:00.736698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:00.736766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:00.761591  285837 cri.go:89] found id: ""
	I1213 10:08:00.761617  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.761627  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:00.761634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:00.761695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:00.786438  285837 cri.go:89] found id: ""
	I1213 10:08:00.786465  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.786474  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:00.786484  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:00.786494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:00.842291  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:00.842327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:00.855993  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:00.856020  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:00.925840  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:00.925874  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:00.925888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:00.953015  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:00.953064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.486172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.496591  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:03.496662  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:03.534940  285837 cri.go:89] found id: ""
	I1213 10:08:03.534964  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.534973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:03.534979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:03.535038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:03.598662  285837 cri.go:89] found id: ""
	I1213 10:08:03.598688  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.598698  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:03.598704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:03.598766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:03.624092  285837 cri.go:89] found id: ""
	I1213 10:08:03.624114  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.624122  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:03.624129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:03.624188  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:03.649153  285837 cri.go:89] found id: ""
	I1213 10:08:03.649176  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.649185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:03.649196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:03.649255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:03.673710  285837 cri.go:89] found id: ""
	I1213 10:08:03.673778  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.673802  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:03.673822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:03.673901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:03.698952  285837 cri.go:89] found id: ""
	I1213 10:08:03.698978  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.699004  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:03.699011  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:03.699076  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:03.723499  285837 cri.go:89] found id: ""
	I1213 10:08:03.723548  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.723558  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:03.723563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:03.723626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:03.748795  285837 cri.go:89] found id: ""
	I1213 10:08:03.748819  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.748828  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:03.748837  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:03.748848  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:03.812342  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:03.812368  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:03.812388  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:03.841166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:03.841206  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.871116  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:03.871146  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:03.927807  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:03.927839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.441780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.452228  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:06.452309  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:06.476347  285837 cri.go:89] found id: ""
	I1213 10:08:06.476370  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.476378  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:06.476384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:06.476441  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:06.504937  285837 cri.go:89] found id: ""
	I1213 10:08:06.504961  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.504970  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:06.504977  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:06.505037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:06.553519  285837 cri.go:89] found id: ""
	I1213 10:08:06.553545  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.553553  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:06.553559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:06.553619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:06.608223  285837 cri.go:89] found id: ""
	I1213 10:08:06.608249  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.608258  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:06.608264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:06.608322  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:06.639732  285837 cri.go:89] found id: ""
	I1213 10:08:06.639801  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.639816  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:06.639823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:06.639886  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:06.668074  285837 cri.go:89] found id: ""
	I1213 10:08:06.668099  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.668108  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:06.668114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:06.668190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:06.691695  285837 cri.go:89] found id: ""
	I1213 10:08:06.691720  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.691729  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:06.691735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:06.691801  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:06.717093  285837 cri.go:89] found id: ""
	I1213 10:08:06.717120  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.717129  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:06.717140  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:06.717152  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:06.773552  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:06.773584  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.787064  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:06.787090  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:06.854164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:06.854189  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:06.854202  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:06.879668  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:06.879702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.406742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.417411  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:09.417484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:09.442113  285837 cri.go:89] found id: ""
	I1213 10:08:09.442138  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.442147  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:09.442153  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:09.442218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:09.466316  285837 cri.go:89] found id: ""
	I1213 10:08:09.466342  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.466351  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:09.466357  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:09.466415  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:09.491678  285837 cri.go:89] found id: ""
	I1213 10:08:09.491703  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.491712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:09.491718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:09.491776  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:09.515316  285837 cri.go:89] found id: ""
	I1213 10:08:09.515337  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.515346  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:09.515352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:09.515410  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:09.567095  285837 cri.go:89] found id: ""
	I1213 10:08:09.567116  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.567125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:09.567131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:09.567197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:09.616045  285837 cri.go:89] found id: ""
	I1213 10:08:09.616067  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.616076  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:09.616082  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:09.616142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:09.640449  285837 cri.go:89] found id: ""
	I1213 10:08:09.640479  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.640488  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:09.640495  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:09.640555  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:09.664888  285837 cri.go:89] found id: ""
	I1213 10:08:09.664912  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.664921  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:09.664930  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:09.664941  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.691077  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:09.691106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:09.747246  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:09.747280  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:09.761112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:09.761140  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:09.830659  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:09.830682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:09.830695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
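
The cycle above (pgrep for an apiserver process, then crictl queries per component, then log gathering) repeats roughly every three seconds for the rest of this section. Below is a minimal Go sketch of that poll loop, using the same shell commands the log runs; the function name, loop structure, and 10-attempt cap are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverUp mirrors the two checks in the log: a pgrep for the
    // process, then a crictl query for a matching container.
    func apiserverUp() bool {
        // "sudo pgrep -xnf kube-apiserver.*minikube.*" exits non-zero
        // when no process matches.
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return true
        }
        // "sudo crictl ps -a --quiet --name=kube-apiserver" prints matching
        // container IDs, one per line; empty output means none found.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        for attempt := 0; attempt < 10; attempt++ { // attempt cap is illustrative
            if apiserverUp() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // the log shows roughly 3s between polls
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }
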
	I1213 10:08:12.356184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.368119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:12.368203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:12.394250  285837 cri.go:89] found id: ""
	I1213 10:08:12.394279  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.394291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:12.394298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:12.394365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:12.419062  285837 cri.go:89] found id: ""
	I1213 10:08:12.419086  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.419095  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:12.419102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:12.419159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:12.446274  285837 cri.go:89] found id: ""
	I1213 10:08:12.446300  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.446308  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:12.446315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:12.446371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:12.469875  285837 cri.go:89] found id: ""
	I1213 10:08:12.469901  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.469910  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:12.469917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:12.469977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:12.495108  285837 cri.go:89] found id: ""
	I1213 10:08:12.495136  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.495145  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:12.495152  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:12.495207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:12.521169  285837 cri.go:89] found id: ""
	I1213 10:08:12.521190  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.521198  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:12.521204  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:12.521258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:12.557387  285837 cri.go:89] found id: ""
	I1213 10:08:12.557412  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.557421  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:12.557427  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:12.557483  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:12.586888  285837 cri.go:89] found id: ""
	I1213 10:08:12.586913  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.586922  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:12.586931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:12.586942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:12.654328  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:12.654361  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:12.668044  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:12.668071  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:12.737226  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:12.737248  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:12.737261  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.762749  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:12.762783  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
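
Each found id: "" / "0 containers" pair above comes from parsing the output of crictl ps --quiet, which prints one container ID per line, so empty output means no matching container. A self-contained sketch of that parsing; listContainerIDs is a hypothetical helper name, not minikube's API.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same crictl invocation as the log and
    // splits the --quiet output (one container ID per line) into a slice.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids) // empty slice -> "0 containers"
    }
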
	I1213 10:08:15.289142  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.301958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:15.302029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:15.330317  285837 cri.go:89] found id: ""
	I1213 10:08:15.330344  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.330353  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:15.330359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:15.330423  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:15.358090  285837 cri.go:89] found id: ""
	I1213 10:08:15.358115  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.358124  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:15.358130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:15.358187  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:15.382832  285837 cri.go:89] found id: ""
	I1213 10:08:15.382862  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.382871  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:15.382877  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:15.382940  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:15.409515  285837 cri.go:89] found id: ""
	I1213 10:08:15.409539  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.409549  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:15.409555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:15.409613  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:15.433885  285837 cri.go:89] found id: ""
	I1213 10:08:15.433911  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.433920  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:15.433926  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:15.433989  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:15.458618  285837 cri.go:89] found id: ""
	I1213 10:08:15.458643  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.458653  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:15.458659  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:15.458715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:15.482592  285837 cri.go:89] found id: ""
	I1213 10:08:15.482616  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.482625  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:15.482635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:15.482693  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:15.511125  285837 cri.go:89] found id: ""
	I1213 10:08:15.511153  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.511163  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:15.511172  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:15.511183  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:15.584797  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:15.584833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:15.598725  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:15.598752  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:15.681678  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:15.681701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:15.681714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:15.707610  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:15.707646  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:18.235184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.246689  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:18.246762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:18.271129  285837 cri.go:89] found id: ""
	I1213 10:08:18.271155  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.271165  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:18.271172  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:18.271240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:18.296110  285837 cri.go:89] found id: ""
	I1213 10:08:18.296135  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.296144  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:18.296150  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:18.296208  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:18.321267  285837 cri.go:89] found id: ""
	I1213 10:08:18.321290  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.321304  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:18.321311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:18.321368  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:18.349274  285837 cri.go:89] found id: ""
	I1213 10:08:18.349300  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.349309  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:18.349315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:18.349414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:18.373235  285837 cri.go:89] found id: ""
	I1213 10:08:18.373310  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.373325  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:18.373335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:18.373395  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:18.397157  285837 cri.go:89] found id: ""
	I1213 10:08:18.397181  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.397190  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:18.397196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:18.397283  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:18.421144  285837 cri.go:89] found id: ""
	I1213 10:08:18.421168  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.421177  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:18.421184  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:18.421243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:18.449567  285837 cri.go:89] found id: ""
	I1213 10:08:18.449643  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.449659  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:18.449670  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:18.449682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:18.505803  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:18.505836  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:18.520075  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:18.520099  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:18.640681  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:18.640706  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:18.640720  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:18.666166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:18.666201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:21.195745  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.206020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:21.206084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:21.246086  285837 cri.go:89] found id: ""
	I1213 10:08:21.246106  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.246115  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:21.246122  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:21.246181  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:21.273446  285837 cri.go:89] found id: ""
	I1213 10:08:21.273469  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.273477  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:21.273483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:21.273543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:21.312010  285837 cri.go:89] found id: ""
	I1213 10:08:21.312031  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.312040  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:21.312046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:21.312104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:21.357158  285837 cri.go:89] found id: ""
	I1213 10:08:21.357177  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.357185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:21.357192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:21.357248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:21.398112  285837 cri.go:89] found id: ""
	I1213 10:08:21.398135  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.398143  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:21.398149  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:21.398205  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:21.447244  285837 cri.go:89] found id: ""
	I1213 10:08:21.447268  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.447276  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:21.447283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:21.447347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:21.495558  285837 cri.go:89] found id: ""
	I1213 10:08:21.495581  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.495589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:21.495595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:21.495652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:21.555224  285837 cri.go:89] found id: ""
	I1213 10:08:21.555248  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.555257  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:21.555270  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:21.555281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:21.627890  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:21.627922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:21.674689  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:21.674714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:21.747238  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:21.747267  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:21.763785  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:21.763813  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:21.844164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
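
Every describe-nodes attempt above fails the same way: kubectl cannot reach the apiserver because nothing is listening on localhost:8443. A quick Go check, run on the node, reproduces that symptom directly; the address comes from the errors above, while the 2-second timeout is an arbitrary choice for the sketch.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl's "dial tcp [::1]:8443: connect: connection refused"
        // means this dial fails immediately.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
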
	I1213 10:08:24.345832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.356414  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:24.356487  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:24.381314  285837 cri.go:89] found id: ""
	I1213 10:08:24.381340  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.381349  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:24.381356  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:24.381418  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:24.405581  285837 cri.go:89] found id: ""
	I1213 10:08:24.405606  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.405614  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:24.405621  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:24.405679  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:24.429873  285837 cri.go:89] found id: ""
	I1213 10:08:24.429895  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.429904  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:24.429911  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:24.429971  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:24.457573  285837 cri.go:89] found id: ""
	I1213 10:08:24.457600  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.457609  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:24.457616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:24.457674  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:24.481838  285837 cri.go:89] found id: ""
	I1213 10:08:24.481865  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.481874  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:24.481880  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:24.481937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:24.507009  285837 cri.go:89] found id: ""
	I1213 10:08:24.507034  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.507043  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:24.507049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:24.507105  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:24.550665  285837 cri.go:89] found id: ""
	I1213 10:08:24.550687  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.550695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:24.550702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:24.550757  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:24.584765  285837 cri.go:89] found id: ""
	I1213 10:08:24.584787  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.584805  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:24.584815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:24.584828  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:24.652249  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:24.652271  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:24.652285  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:24.677128  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:24.677161  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:24.705609  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:24.705635  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:24.761364  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:24.761399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
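
Each "Gathering logs for ..." step shells out through /bin/bash -c with the command shown on the following log line. A sketch that runs the same four always-available sources; the command strings are copied from the log, while the map and loop around them are illustrative (the log itself visits the sources in varying order across cycles, so unordered map iteration is a fair stand-in).

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Command strings copied verbatim from the log; the surrounding
    // structure is illustrative only.
    var logSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "containerd":       "sudo journalctl -u containerd -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
        for name, cmd := range logSources {
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("  captured %d bytes from %s\n", len(out), name)
        }
    }
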
	I1213 10:08:26.371661  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:08:26.432065  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:26.432188  285837 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
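
The storage-provisioner failure above is treated as retryable: addons.go logs "apply failed, will retry" and re-runs the same kubectl apply while the apiserver stays unreachable. A sketch of that retry shape, using the exact invocation from the log; the 3-attempt cap and 5-second backoff are invented for illustration.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    const kubectl = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"

    // applyAddon runs the same command the log shows; KUBECONFIG is
    // passed through sudo exactly as logged.
    func applyAddon(manifest string) error {
        return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            kubectl, "apply", "--force", "-f", manifest).Run()
    }

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        for attempt := 1; attempt <= 3; attempt++ { // retry policy is illustrative
            err := applyAddon(manifest)
            if err == nil {
                fmt.Println("addon applied")
                return
            }
            fmt.Printf("apply failed (attempt %d), will retry: %v\n", attempt, err)
            time.Sleep(5 * time.Second)
        }
        fmt.Println("! Enabling 'storage-provisioner' returned an error")
    }
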
	I1213 10:08:27.285248  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.295647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:27.295723  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:27.320532  285837 cri.go:89] found id: ""
	I1213 10:08:27.320555  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.320564  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:27.320570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:27.320628  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:27.344722  285837 cri.go:89] found id: ""
	I1213 10:08:27.344748  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.344758  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:27.344764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:27.344852  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:27.370726  285837 cri.go:89] found id: ""
	I1213 10:08:27.370751  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.370760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:27.370766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:27.370849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:27.394557  285837 cri.go:89] found id: ""
	I1213 10:08:27.394583  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.394617  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:27.394628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:27.394703  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:27.418575  285837 cri.go:89] found id: ""
	I1213 10:08:27.418601  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.418610  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:27.418616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:27.418673  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:27.444932  285837 cri.go:89] found id: ""
	I1213 10:08:27.444953  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.444962  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:27.444968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:27.445029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:27.468135  285837 cri.go:89] found id: ""
	I1213 10:08:27.468213  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.468237  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:27.468256  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:27.468330  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:27.493054  285837 cri.go:89] found id: ""
	I1213 10:08:27.493079  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.493089  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:27.493098  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:27.493126  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:27.555066  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:27.555141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:27.572569  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:27.572644  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:27.641611  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:27.641682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:27.641704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:27.667653  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:27.667690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:29.393883  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:08:29.454286  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:29.454393  285837 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
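
The repeated validation failures above share one root cause: with client-side validation enabled, kubectl first downloads the OpenAPI schema from /openapi/v2, and nothing is listening on localhost:8443 inside the node, so every manifest fails before it is even submitted. A minimal sketch of the two behaviors the error text names, reusing only the paths and flags that appear in this log (run inside the node; note the apply still fails under --validate=false while the apiserver is down, the error merely changes shape):

	# Default: client-side validation must fetch /openapi/v2 first,
	# so the apply dies on the schema download.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml
	# Workaround named in the error text: skips schema validation only;
	# the apply itself still needs a reachable apiserver.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml
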
	I1213 10:08:30.208961  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.219829  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:30.219950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:30.248442  285837 cri.go:89] found id: ""
	I1213 10:08:30.248471  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.248480  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:30.248486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:30.248569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:30.273935  285837 cri.go:89] found id: ""
	I1213 10:08:30.273964  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.273973  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:30.273979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:30.274067  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:30.299229  285837 cri.go:89] found id: ""
	I1213 10:08:30.299256  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.299265  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:30.299271  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:30.299328  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:30.327770  285837 cri.go:89] found id: ""
	I1213 10:08:30.327792  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.327801  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:30.327807  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:30.327863  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:30.352796  285837 cri.go:89] found id: ""
	I1213 10:08:30.352851  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.352861  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:30.352867  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:30.352928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:30.376505  285837 cri.go:89] found id: ""
	I1213 10:08:30.376530  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.376539  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:30.376546  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:30.376646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:30.400512  285837 cri.go:89] found id: ""
	I1213 10:08:30.400536  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.400545  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:30.400551  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:30.400611  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:30.425139  285837 cri.go:89] found id: ""
	I1213 10:08:30.425162  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.425171  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:30.425181  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:30.425192  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:30.454686  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:30.454713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:30.509531  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:30.509568  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:30.527699  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:30.527727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:30.597883  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:30.597907  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:30.597920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
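
Each retry round in this log repeats the same diagnostic pass: check for a live kube-apiserver process, list control-plane containers by name through crictl, and only then gather kubelet, dmesg, describe-nodes, and containerd output. A condensed sketch of that pass, assembled from the exact commands shown above (the loop form is editorial shorthand for the per-component calls):

	# Any apiserver process at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List containers in all states for each control-plane component.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# Supporting logs, gathered once every listing comes back empty.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u containerd -n 400
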
	I1213 10:08:33.123638  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.134229  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:33.134302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:33.161169  285837 cri.go:89] found id: ""
	I1213 10:08:33.161201  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.161210  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:33.161218  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:33.161278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:33.189591  285837 cri.go:89] found id: ""
	I1213 10:08:33.189614  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.189623  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:33.189629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:33.189691  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:33.213288  285837 cri.go:89] found id: ""
	I1213 10:08:33.213315  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.213325  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:33.213331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:33.213388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:33.237186  285837 cri.go:89] found id: ""
	I1213 10:08:33.237214  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.237223  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:33.237230  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:33.237291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:33.265589  285837 cri.go:89] found id: ""
	I1213 10:08:33.265615  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.265623  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:33.265629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:33.265687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:33.289791  285837 cri.go:89] found id: ""
	I1213 10:08:33.289862  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.289884  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:33.289902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:33.289986  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:33.314058  285837 cri.go:89] found id: ""
	I1213 10:08:33.314085  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.314094  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:33.314099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:33.314170  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:33.338463  285837 cri.go:89] found id: ""
	I1213 10:08:33.338490  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.338499  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:33.338509  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:33.338521  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:33.393919  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:33.393953  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:33.407152  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:33.407179  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:33.470838  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:33.470862  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:33.470875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.495641  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:33.495672  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.035663  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.047578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:36.047649  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:36.076122  285837 cri.go:89] found id: ""
	I1213 10:08:36.076145  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.076154  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:36.076160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:36.076236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:36.105524  285837 cri.go:89] found id: ""
	I1213 10:08:36.105554  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.105564  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:36.105570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:36.105629  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:36.134491  285837 cri.go:89] found id: ""
	I1213 10:08:36.134565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.134587  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:36.134607  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:36.134695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:36.159376  285837 cri.go:89] found id: ""
	I1213 10:08:36.159449  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.159471  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:36.159489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:36.159608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:36.185490  285837 cri.go:89] found id: ""
	I1213 10:08:36.185565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.185590  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:36.185604  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:36.185676  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:36.219394  285837 cri.go:89] found id: ""
	I1213 10:08:36.219422  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.219431  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:36.219438  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:36.219494  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:36.243333  285837 cri.go:89] found id: ""
	I1213 10:08:36.243357  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.243367  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:36.243373  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:36.243435  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:36.267160  285837 cri.go:89] found id: ""
	I1213 10:08:36.267187  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.267196  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:36.267206  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:36.267218  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:36.280345  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:36.280375  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:36.343250  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:36.343272  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:36.343284  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:36.368575  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:36.368610  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.395546  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:36.395573  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:38.955916  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.966663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:38.966732  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:38.991698  285837 cri.go:89] found id: ""
	I1213 10:08:38.991722  285837 logs.go:282] 0 containers: []
	W1213 10:08:38.991730  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:38.991737  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:38.991795  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:39.029472  285837 cri.go:89] found id: ""
	I1213 10:08:39.029501  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.029510  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:39.029515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:39.029610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:39.058052  285837 cri.go:89] found id: ""
	I1213 10:08:39.058082  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.058097  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:39.058104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:39.058165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:39.086309  285837 cri.go:89] found id: ""
	I1213 10:08:39.086331  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.086339  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:39.086345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:39.086407  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:39.113392  285837 cri.go:89] found id: ""
	I1213 10:08:39.113420  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.113430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:39.113436  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:39.113497  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:39.138083  285837 cri.go:89] found id: ""
	I1213 10:08:39.138109  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.138118  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:39.138125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:39.138182  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:39.162132  285837 cri.go:89] found id: ""
	I1213 10:08:39.162160  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.162170  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:39.162176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:39.162239  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:39.190634  285837 cri.go:89] found id: ""
	I1213 10:08:39.190661  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.190670  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:39.190679  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:39.190691  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:39.215694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:39.215727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:39.246161  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:39.246189  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:39.305962  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:39.305996  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:39.319717  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:39.319744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:39.382189  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:41.883328  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.894154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:41.894228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:41.921476  285837 cri.go:89] found id: ""
	I1213 10:08:41.921500  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.921509  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:41.921515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:41.921573  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:41.945812  285837 cri.go:89] found id: ""
	I1213 10:08:41.945835  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.945843  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:41.945849  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:41.945912  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:41.946276  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:08:41.977805  285837 cri.go:89] found id: ""
	I1213 10:08:41.977840  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.977849  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:41.977855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:41.977923  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 10:08:42.037880  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:42.037998  285837 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
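
The 'default-storageclass' addon fails the same way as 'dashboard': nothing is accepting connections on localhost:8443. A quick socket-level check, independent of kubectl, could confirm that directly (curl and ss are assumptions here, neither is used anywhere in this log; /healthz is the standard apiserver health endpoint):

	# Does anything answer on 8443 inside the node?
	curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable"
	# Is there even a listener bound to the port?
	sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"
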
	I1213 10:08:42.038333  285837 cri.go:89] found id: ""
	I1213 10:08:42.038351  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.038357  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:42.038364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:42.038439  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:42.041174  285837 out.go:179] * Enabled addons: 
	I1213 10:08:42.044041  285837 addons.go:530] duration metric: took 1m50.679416537s for enable addons: enabled=[]
	I1213 10:08:42.069124  285837 cri.go:89] found id: ""
	I1213 10:08:42.069158  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.069173  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:42.069181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:42.069277  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:42.114076  285837 cri.go:89] found id: ""
	I1213 10:08:42.114106  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.114119  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:42.114129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:42.114201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:42.143501  285837 cri.go:89] found id: ""
	I1213 10:08:42.143577  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.143587  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:42.143594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:42.143665  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:42.174231  285837 cri.go:89] found id: ""
	I1213 10:08:42.174258  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.174267  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:42.174278  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:42.174291  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:42.209465  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:42.209500  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:42.270663  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:42.270702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:42.286732  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:42.286769  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:42.356785  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:42.356809  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:42.356822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:44.882858  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.893320  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:44.893392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:44.918585  285837 cri.go:89] found id: ""
	I1213 10:08:44.918612  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.918621  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:44.918628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:44.918686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:44.943719  285837 cri.go:89] found id: ""
	I1213 10:08:44.943746  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.943755  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:44.943762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:44.943822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:44.968177  285837 cri.go:89] found id: ""
	I1213 10:08:44.968204  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.968213  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:44.968219  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:44.968273  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:45.012025  285837 cri.go:89] found id: ""
	I1213 10:08:45.012052  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.012062  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:45.012069  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:45.012140  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:45.059717  285837 cri.go:89] found id: ""
	I1213 10:08:45.059815  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.059841  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:45.059864  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:45.059985  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:45.146429  285837 cri.go:89] found id: ""
	I1213 10:08:45.146507  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.146534  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:45.146585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:45.146680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:45.192650  285837 cri.go:89] found id: ""
	I1213 10:08:45.192683  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.192695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:45.192704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:45.192786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:45.240936  285837 cri.go:89] found id: ""
	I1213 10:08:45.241266  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.241306  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:45.241344  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:45.241423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:45.280178  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:45.280250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:45.343980  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:45.344023  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:45.357799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:45.357833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:45.421366  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:45.421390  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:45.421403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:47.952239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.963745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:47.963816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:47.989230  285837 cri.go:89] found id: ""
	I1213 10:08:47.989253  285837 logs.go:282] 0 containers: []
	W1213 10:08:47.989262  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:47.989288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:47.989360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:48.018062  285837 cri.go:89] found id: ""
	I1213 10:08:48.018087  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.018096  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:48.018102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:48.018165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:48.049042  285837 cri.go:89] found id: ""
	I1213 10:08:48.049068  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.049078  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:48.049084  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:48.049147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:48.077924  285837 cri.go:89] found id: ""
	I1213 10:08:48.077946  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.077955  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:48.077965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:48.078023  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:48.106258  285837 cri.go:89] found id: ""
	I1213 10:08:48.106284  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.106292  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:48.106298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:48.106355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:48.130836  285837 cri.go:89] found id: ""
	I1213 10:08:48.130861  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.130869  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:48.130883  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:48.130945  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:48.157446  285837 cri.go:89] found id: ""
	I1213 10:08:48.157470  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.157479  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:48.157485  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:48.157543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:48.182657  285837 cri.go:89] found id: ""
	I1213 10:08:48.182687  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.182697  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:48.182707  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:48.182719  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:48.196607  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:48.196685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:48.261824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
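Every describe nodes attempt in this run fails the same way: the in-VM kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and since no kube-apiserver container exists, nothing listens on that port and each TCP connect is refused. A standalone check that reproduces the symptom (a hypothetical snippet, not part of the test suite) is:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the port the in-VM kubeconfig targets; with no kube-apiserver
    	// running this fails with "connect: connection refused", matching the
    	// kubectl errors above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }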
	I1213 10:08:48.261895  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:48.261914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:48.287393  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:48.287436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:48.318617  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:48.318647  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
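Each retry cycle in this log follows the same diagnostic pattern: minikube probes the CRI for every expected control-plane container with crictl, finds none, then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before polling again. The sketch below shows the probe step in minimal Go; it shells out to the same crictl invocation recorded above, but the helper name and error handling are illustrative, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs returns the IDs of all containers (any state) whose
    // name matches the given component, via the crictl call seen in the log.
    // An empty result means the component has never been started on the node.
    func listContainerIDs(component string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, c := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	} {
    		if ids := listContainerIDs(c); len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    		} else {
    			fmt.Printf("found %d container(s) for %q\n", len(ids), c)
    		}
    	}
    }

An all-empty pass over that list is exactly what the repeated W "No container was found matching ..." lines above report.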
	I1213 10:08:50.875656  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.886169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:50.886240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:50.910775  285837 cri.go:89] found id: ""
	I1213 10:08:50.910801  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.910810  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:50.910817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:50.910874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:50.936159  285837 cri.go:89] found id: ""
	I1213 10:08:50.936185  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.936194  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:50.936200  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:50.936262  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:50.960845  285837 cri.go:89] found id: ""
	I1213 10:08:50.960879  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.960888  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:50.960895  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:50.960956  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:50.989232  285837 cri.go:89] found id: ""
	I1213 10:08:50.989262  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.989271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:50.989277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:50.989361  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:51.017908  285837 cri.go:89] found id: ""
	I1213 10:08:51.017936  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.017944  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:51.017950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:51.018012  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:51.062320  285837 cri.go:89] found id: ""
	I1213 10:08:51.062355  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.062363  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:51.062369  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:51.062436  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:51.091004  285837 cri.go:89] found id: ""
	I1213 10:08:51.091038  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.091047  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:51.091053  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:51.091118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:51.116510  285837 cri.go:89] found id: ""
	I1213 10:08:51.116543  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.116552  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:51.116561  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:51.116574  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:51.147665  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:51.147690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:51.203425  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:51.203457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:51.216632  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:51.216657  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:51.278157  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:51.278181  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:51.278195  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:53.804075  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.815823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:53.815894  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:53.841157  285837 cri.go:89] found id: ""
	I1213 10:08:53.841180  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.841189  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:53.841195  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:53.841251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:53.869816  285837 cri.go:89] found id: ""
	I1213 10:08:53.869840  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.869850  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:53.869856  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:53.869916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:53.893754  285837 cri.go:89] found id: ""
	I1213 10:08:53.893781  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.893789  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:53.893796  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:53.893856  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:53.917859  285837 cri.go:89] found id: ""
	I1213 10:08:53.917881  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.917890  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:53.917896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:53.917957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:53.941859  285837 cri.go:89] found id: ""
	I1213 10:08:53.941886  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.941895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:53.941902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:53.941964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:53.969296  285837 cri.go:89] found id: ""
	I1213 10:08:53.969320  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.969329  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:53.969335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:53.969392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:53.993419  285837 cri.go:89] found id: ""
	I1213 10:08:53.993448  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.993458  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:53.993464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:53.993520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:54.026047  285837 cri.go:89] found id: ""
	I1213 10:08:54.026074  285837 logs.go:282] 0 containers: []
	W1213 10:08:54.026084  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:54.026094  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:54.026106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:54.042132  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:54.042160  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:54.121343  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:54.121416  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:54.121439  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:54.146468  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:54.146502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:54.173087  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:54.173114  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:56.730884  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.741016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:56.741083  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:56.765437  285837 cri.go:89] found id: ""
	I1213 10:08:56.765461  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.765470  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:56.765476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:56.765535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:56.804701  285837 cri.go:89] found id: ""
	I1213 10:08:56.804725  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.804734  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:56.804740  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:56.804796  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:56.831548  285837 cri.go:89] found id: ""
	I1213 10:08:56.831573  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.831582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:56.831588  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:56.831646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:56.860131  285837 cri.go:89] found id: ""
	I1213 10:08:56.860154  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.860162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:56.860169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:56.860223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:56.884508  285837 cri.go:89] found id: ""
	I1213 10:08:56.884532  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.884540  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:56.884547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:56.884602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:56.909197  285837 cri.go:89] found id: ""
	I1213 10:08:56.909223  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.909232  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:56.909238  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:56.909296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:56.934089  285837 cri.go:89] found id: ""
	I1213 10:08:56.934110  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.934119  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:56.934126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:56.934183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:56.958725  285837 cri.go:89] found id: ""
	I1213 10:08:56.958745  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.958754  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:56.958764  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:56.958775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:57.027824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:57.027846  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:57.027859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:57.054139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:57.054169  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:57.085873  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:57.085903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:57.144978  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:57.145011  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
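The roughly three-second gap between cycle timestamps comes from the wait loop driving this phase: minikube re-runs sudo pgrep -xnf kube-apiserver.*minikube.* and only proceeds once a matching process appears, re-collecting the logs above on every miss. A poll-until-timeout sketch is below; the interval and timeout values are illustrative assumptions, not minikube's real settings.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process shows up or
    // the timeout expires. pgrep exits 0 only when at least one process
    // matches, so a nil error from Run() means the apiserver is running.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// -x: pattern must match the full line, -n: newest match only,
    		// -f: match against the whole command line.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }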
	I1213 10:08:59.659171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.669569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:59.669639  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:59.695058  285837 cri.go:89] found id: ""
	I1213 10:08:59.695123  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.695146  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:59.695163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:59.695255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:59.720734  285837 cri.go:89] found id: ""
	I1213 10:08:59.720799  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.720822  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:59.720840  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:59.720935  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:59.744586  285837 cri.go:89] found id: ""
	I1213 10:08:59.744661  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.744684  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:59.744698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:59.744770  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:59.771374  285837 cri.go:89] found id: ""
	I1213 10:08:59.771408  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.771417  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:59.771439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:59.771541  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:59.799406  285837 cri.go:89] found id: ""
	I1213 10:08:59.799441  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.799450  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:59.799473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:59.799577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:59.828067  285837 cri.go:89] found id: ""
	I1213 10:08:59.828142  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.828165  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:59.828187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:59.828255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:59.853064  285837 cri.go:89] found id: ""
	I1213 10:08:59.853130  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.853152  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:59.853174  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:59.853238  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:59.881735  285837 cri.go:89] found id: ""
	I1213 10:08:59.881772  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.881781  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:59.881790  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:59.881820  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:59.909551  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:59.909578  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:59.965746  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:59.965781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.979378  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:59.979407  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:00.187890  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:00.187915  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:00.187930  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:02.742568  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:02.753251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:02.753340  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:02.786726  285837 cri.go:89] found id: ""
	I1213 10:09:02.786749  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.786758  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:02.786764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:02.786823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:02.819145  285837 cri.go:89] found id: ""
	I1213 10:09:02.819166  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.819174  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:02.819193  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:02.819251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:02.847100  285837 cri.go:89] found id: ""
	I1213 10:09:02.847124  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.847133  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:02.847139  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:02.847202  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:02.873292  285837 cri.go:89] found id: ""
	I1213 10:09:02.873316  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.873325  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:02.873332  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:02.873388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:02.897520  285837 cri.go:89] found id: ""
	I1213 10:09:02.897544  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.897553  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:02.897560  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:02.897617  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:02.922393  285837 cri.go:89] found id: ""
	I1213 10:09:02.922416  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.922425  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:02.922431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:02.922490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:02.947241  285837 cri.go:89] found id: ""
	I1213 10:09:02.947264  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.947272  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:02.947278  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:02.947335  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:02.972679  285837 cri.go:89] found id: ""
	I1213 10:09:02.972704  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.972713  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:02.972722  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:02.972733  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:03.034867  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:03.034909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:03.052540  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:03.052570  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:03.128351  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:03.128373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:03.128386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:03.154970  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:03.155008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:05.683571  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:05.693787  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:05.693854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:05.718259  285837 cri.go:89] found id: ""
	I1213 10:09:05.718282  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.718291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:05.718297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:05.718357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:05.745891  285837 cri.go:89] found id: ""
	I1213 10:09:05.745915  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.745924  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:05.745931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:05.745987  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:05.782435  285837 cri.go:89] found id: ""
	I1213 10:09:05.782460  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.782469  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:05.782475  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:05.782530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:05.814908  285837 cri.go:89] found id: ""
	I1213 10:09:05.814951  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.814962  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:05.814969  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:05.815039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:05.841933  285837 cri.go:89] found id: ""
	I1213 10:09:05.841961  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.841971  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:05.841978  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:05.842039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:05.866012  285837 cri.go:89] found id: ""
	I1213 10:09:05.866041  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.866050  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:05.866056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:05.866115  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:05.890279  285837 cri.go:89] found id: ""
	I1213 10:09:05.890307  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.890315  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:05.890322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:05.890379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:05.915405  285837 cri.go:89] found id: ""
	I1213 10:09:05.915428  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.915436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:05.915446  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:05.915457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:05.971454  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:05.971486  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:05.984906  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:05.984951  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:06.083616  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:06.083701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:06.083737  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:06.114405  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:06.114443  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:08.641977  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:08.652131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:08.652197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:08.675938  285837 cri.go:89] found id: ""
	I1213 10:09:08.675961  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.675970  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:08.675976  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:08.676038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:08.702206  285837 cri.go:89] found id: ""
	I1213 10:09:08.702281  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.702304  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:08.702321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:08.702400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:08.726527  285837 cri.go:89] found id: ""
	I1213 10:09:08.726599  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.726621  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:08.726639  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:08.726726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:08.751396  285837 cri.go:89] found id: ""
	I1213 10:09:08.751469  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.751492  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:08.751555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:08.751631  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:08.787796  285837 cri.go:89] found id: ""
	I1213 10:09:08.787828  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.787838  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:08.787844  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:08.787908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:08.819599  285837 cri.go:89] found id: ""
	I1213 10:09:08.819634  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.819643  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:08.819650  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:08.819717  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:08.846345  285837 cri.go:89] found id: ""
	I1213 10:09:08.846372  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.846381  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:08.846387  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:08.846445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:08.870594  285837 cri.go:89] found id: ""
	I1213 10:09:08.870664  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.870710  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:08.870746  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:08.870797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:08.928780  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:08.928814  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:08.944017  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:08.944043  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:09.014860  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:09.004450    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.005493    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008097    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008783    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.010658    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:09.014883  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:09.014896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:09.047081  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:09.047174  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.588198  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:11.600902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:11.600973  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:11.629261  285837 cri.go:89] found id: ""
	I1213 10:09:11.629286  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.629295  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:11.629301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:11.629362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:11.653238  285837 cri.go:89] found id: ""
	I1213 10:09:11.653260  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.653269  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:11.653275  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:11.653332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:11.681922  285837 cri.go:89] found id: ""
	I1213 10:09:11.681946  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.681956  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:11.681962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:11.682019  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:11.711733  285837 cri.go:89] found id: ""
	I1213 10:09:11.711762  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.711770  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:11.711776  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:11.711834  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:11.736582  285837 cri.go:89] found id: ""
	I1213 10:09:11.736608  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.736616  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:11.736625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:11.736681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:11.759927  285837 cri.go:89] found id: ""
	I1213 10:09:11.759951  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.759961  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:11.759967  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:11.760022  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:11.794760  285837 cri.go:89] found id: ""
	I1213 10:09:11.794787  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.794797  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:11.794803  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:11.794862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:11.822009  285837 cri.go:89] found id: ""
	I1213 10:09:11.822037  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.822047  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
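The sweep above asks containerd for every expected control-plane component by container name. The crictl calls themselves succeed, so the runtime is up; every query just returns an empty ID list, meaning no Kubernetes container was ever created. The same sweep can be reproduced by hand inside the node:

    # One query per component, mirroring the crictl calls in the log
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"
    done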
	I1213 10:09:11.822056  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:11.822068  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:11.889206  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:11.881444    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882052    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882987    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.883619    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.885240    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:11.889228  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:11.889241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:11.914544  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:11.914576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.944548  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:11.944576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:12.000427  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:12.000460  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
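With no component containers to inspect, the remaining evidence is host-level: the kubelet and containerd journals plus kernel warnings. The exact commands the test runs are usable as-is inside the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400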
	I1213 10:09:14.516876  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:14.527580  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:14.527657  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:14.551881  285837 cri.go:89] found id: ""
	I1213 10:09:14.551903  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.551911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:14.551917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:14.551977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:14.576244  285837 cri.go:89] found id: ""
	I1213 10:09:14.576267  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.576275  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:14.576281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:14.576337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:14.604979  285837 cri.go:89] found id: ""
	I1213 10:09:14.605002  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.605011  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:14.605017  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:14.605084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:14.633024  285837 cri.go:89] found id: ""
	I1213 10:09:14.633050  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.633059  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:14.633065  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:14.633123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:14.661288  285837 cri.go:89] found id: ""
	I1213 10:09:14.661316  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.661324  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:14.661331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:14.661390  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:14.686665  285837 cri.go:89] found id: ""
	I1213 10:09:14.686694  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.686704  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:14.686711  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:14.686769  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:14.712111  285837 cri.go:89] found id: ""
	I1213 10:09:14.712139  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.712148  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:14.712156  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:14.712212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:14.740346  285837 cri.go:89] found id: ""
	I1213 10:09:14.740392  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.740401  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:14.740410  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:14.740423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:14.753460  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:14.753488  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:14.834789  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:14.826269    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.827206    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.828874    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.829190    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.830588    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:14.834812  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:14.834824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:14.859634  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:14.859666  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:14.890753  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:14.890826  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.450898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:17.461075  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:17.461145  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:17.486593  285837 cri.go:89] found id: ""
	I1213 10:09:17.486616  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.486625  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:17.486632  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:17.486689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:17.511138  285837 cri.go:89] found id: ""
	I1213 10:09:17.511214  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.511230  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:17.511237  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:17.511302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:17.535780  285837 cri.go:89] found id: ""
	I1213 10:09:17.535808  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.535818  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:17.535824  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:17.535879  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:17.559884  285837 cri.go:89] found id: ""
	I1213 10:09:17.559907  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.559916  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:17.559922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:17.559983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:17.588420  285837 cri.go:89] found id: ""
	I1213 10:09:17.588446  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.588456  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:17.588462  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:17.588520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:17.616357  285837 cri.go:89] found id: ""
	I1213 10:09:17.616427  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.616450  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:17.616470  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:17.616553  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:17.640411  285837 cri.go:89] found id: ""
	I1213 10:09:17.640481  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.640506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:17.640525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:17.640606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:17.670821  285837 cri.go:89] found id: ""
	I1213 10:09:17.670887  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.670910  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:17.670931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:17.670976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.730483  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:17.730517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:17.743937  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:17.743965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:17.835718  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:17.824527    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.825248    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.826849    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.827326    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.828934    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:17.835789  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:17.835817  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:17.865207  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:17.865241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:20.392780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:20.403097  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:20.403162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:20.428028  285837 cri.go:89] found id: ""
	I1213 10:09:20.428060  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.428069  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:20.428076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:20.428141  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:20.452273  285837 cri.go:89] found id: ""
	I1213 10:09:20.452297  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.452305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:20.452312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:20.452375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:20.476828  285837 cri.go:89] found id: ""
	I1213 10:09:20.476852  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.476860  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:20.476866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:20.476922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:20.500929  285837 cri.go:89] found id: ""
	I1213 10:09:20.500952  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.500968  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:20.500975  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:20.501033  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:20.528180  285837 cri.go:89] found id: ""
	I1213 10:09:20.528207  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.528217  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:20.528223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:20.528284  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:20.553290  285837 cri.go:89] found id: ""
	I1213 10:09:20.553314  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.553323  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:20.553330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:20.553386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:20.577422  285837 cri.go:89] found id: ""
	I1213 10:09:20.577446  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.577455  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:20.577464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:20.577518  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:20.601597  285837 cri.go:89] found id: ""
	I1213 10:09:20.601623  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.601632  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:20.601643  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:20.601654  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:20.656521  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:20.656556  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:20.669890  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:20.669920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:20.737784  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:20.729553    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.730242    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.731915    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.732434    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.734060    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:20.737806  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:20.737818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:20.762811  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:20.762845  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.299625  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:23.311059  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:23.311129  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:23.338174  285837 cri.go:89] found id: ""
	I1213 10:09:23.338197  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.338205  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:23.338211  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:23.338269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:23.363653  285837 cri.go:89] found id: ""
	I1213 10:09:23.363674  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.363683  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:23.363688  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:23.363750  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:23.387166  285837 cri.go:89] found id: ""
	I1213 10:09:23.387187  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.387195  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:23.387201  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:23.387257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:23.411627  285837 cri.go:89] found id: ""
	I1213 10:09:23.411650  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.411659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:23.411665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:23.411731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:23.440839  285837 cri.go:89] found id: ""
	I1213 10:09:23.440866  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.440885  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:23.440892  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:23.440950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:23.464835  285837 cri.go:89] found id: ""
	I1213 10:09:23.464857  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.464866  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:23.464872  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:23.464927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:23.489635  285837 cri.go:89] found id: ""
	I1213 10:09:23.489659  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.489668  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:23.489675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:23.489762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:23.513816  285837 cri.go:89] found id: ""
	I1213 10:09:23.513847  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.513855  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:23.513865  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:23.513875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:23.539139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:23.539173  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.565435  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:23.565463  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:23.622023  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:23.622058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:23.635231  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:23.635263  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:23.699057  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:23.690976    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.691550    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693329    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693735    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.695223    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
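"describe nodes" runs the on-node kubectl binary against /var/lib/minikube/kubeconfig, which points at localhost:8443, so it fails for the same reason as every other call. Since the sweep finds no kube-apiserver container at all, the natural next checks, a hedged sketch assuming the standard kubeadm/minikube layout, are whether kubelet is active and whether the static-pod manifests it should be launching exist:

    systemctl is-active kubelet
    sudo ls /etc/kubernetes/manifests/   # expect kube-apiserver.yaml and friends
    sudo crictl pods                     # any pod sandboxes at all?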
	I1213 10:09:26.200117  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:26.210617  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:26.210696  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:26.235048  285837 cri.go:89] found id: ""
	I1213 10:09:26.235076  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.235085  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:26.235092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:26.235148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:26.259259  285837 cri.go:89] found id: ""
	I1213 10:09:26.259285  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.259294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:26.259300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:26.259355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:26.291742  285837 cri.go:89] found id: ""
	I1213 10:09:26.291767  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.291776  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:26.291782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:26.291864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:26.320200  285837 cri.go:89] found id: ""
	I1213 10:09:26.320225  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.320234  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:26.320240  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:26.320296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:26.347996  285837 cri.go:89] found id: ""
	I1213 10:09:26.348023  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.348033  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:26.348039  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:26.348097  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:26.376752  285837 cri.go:89] found id: ""
	I1213 10:09:26.376816  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.376830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:26.376837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:26.376893  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:26.404777  285837 cri.go:89] found id: ""
	I1213 10:09:26.404802  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.404811  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:26.404817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:26.404876  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:26.428882  285837 cri.go:89] found id: ""
	I1213 10:09:26.428904  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.428913  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:26.428922  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:26.428933  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:26.489455  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:26.489494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:26.504291  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:26.504320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:26.573661  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:26.564906    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.565725    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567441    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567990    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.569686    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.573684  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:26.573698  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:26.599463  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:26.599496  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.127681  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:29.138010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:29.138081  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:29.161918  285837 cri.go:89] found id: ""
	I1213 10:09:29.161989  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.162013  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:29.162031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:29.162114  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:29.186603  285837 cri.go:89] found id: ""
	I1213 10:09:29.186678  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.186700  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:29.186717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:29.186798  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:29.210425  285837 cri.go:89] found id: ""
	I1213 10:09:29.210489  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.210512  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:29.210529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:29.210614  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:29.237345  285837 cri.go:89] found id: ""
	I1213 10:09:29.237369  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.237377  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:29.237384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:29.237440  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:29.260918  285837 cri.go:89] found id: ""
	I1213 10:09:29.260997  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.261013  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:29.261020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:29.261075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:29.289712  285837 cri.go:89] found id: ""
	I1213 10:09:29.289738  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.289747  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:29.289753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:29.289808  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:29.321797  285837 cri.go:89] found id: ""
	I1213 10:09:29.321821  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.321831  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:29.321839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:29.321895  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:29.353498  285837 cri.go:89] found id: ""
	I1213 10:09:29.353523  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.353532  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:29.353542  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:29.353582  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:29.415160  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:29.407188    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.407994    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409598    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409900    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.411394    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:29.415183  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:29.415198  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:29.440924  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:29.440961  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.468916  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:29.468944  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:29.528468  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:29.528501  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
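The whole block then repeats on a roughly three-second cadence (10:09:09, :11, :14, :17, :20, :23, :26, :29, :32, ...): minikube is polling for an apiserver process until its wait deadline expires. A minimal sketch of the observable loop only, not minikube's actual implementation:

    # Observable behavior only; the real wait lives in minikube's Go code
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3   # each miss re-runs the crictl sweep and log gathering above
    done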
	I1213 10:09:32.042457  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:32.054480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:32.054563  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:32.088256  285837 cri.go:89] found id: ""
	I1213 10:09:32.088282  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.088290  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:32.088296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:32.088382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:32.114080  285837 cri.go:89] found id: ""
	I1213 10:09:32.114102  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.114110  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:32.114116  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:32.114195  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:32.138708  285837 cri.go:89] found id: ""
	I1213 10:09:32.138732  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.138740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:32.138746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:32.138851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:32.163676  285837 cri.go:89] found id: ""
	I1213 10:09:32.163706  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.163715  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:32.163721  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:32.163780  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:32.188486  285837 cri.go:89] found id: ""
	I1213 10:09:32.188565  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.188582  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:32.188589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:32.188652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:32.212912  285837 cri.go:89] found id: ""
	I1213 10:09:32.212936  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.212945  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:32.212951  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:32.213034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:32.242067  285837 cri.go:89] found id: ""
	I1213 10:09:32.242090  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.242099  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:32.242106  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:32.242163  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:32.280832  285837 cri.go:89] found id: ""
	I1213 10:09:32.280855  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.280864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:32.280874  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:32.280885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:32.344925  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:32.344963  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:32.359370  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:32.359400  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:32.425438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
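
Every describe-nodes attempt in this test fails identically: kubectl dials https://localhost:8443 and the kernel answers with connection refused, meaning nothing is listening on the port at all, which is a stronger statement than "the apiserver is unhealthy". A minimal Go sketch that separates those two cases, assuming the same localhost:8443 endpoint as the log (generic code, not minikube's):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net"
        "net/http"
        "time"
    )

    func main() {
        addr := "localhost:8443"

        // Step 1: raw TCP dial. A "connect: connection refused" here matches
        // the `dial tcp [::1]:8443` errors in the stderr blocks above.
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Println("port closed; no apiserver process is listening:", err)
            return
        }
        conn.Close()

        // Step 2: something is listening; ask its health endpoint.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a cluster-local certificate in this
                // setup, so skip verification for this probe only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://" + addr + "/healthz")
        if err != nil {
            fmt.Println("port open but the request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver /healthz:", resp.Status)
    }
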
	I1213 10:09:32.425459  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:32.425472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:32.449956  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:32.449990  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:34.978245  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:34.989159  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:34.989236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:35.017235  285837 cri.go:89] found id: ""
	I1213 10:09:35.017258  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.017267  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:35.017273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:35.017341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:35.050437  285837 cri.go:89] found id: ""
	I1213 10:09:35.050458  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.050467  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:35.050473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:35.050529  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:35.085905  285837 cri.go:89] found id: ""
	I1213 10:09:35.085926  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.085935  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:35.085941  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:35.085994  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:35.118261  285837 cri.go:89] found id: ""
	I1213 10:09:35.118283  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.118292  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:35.118299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:35.118360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:35.144531  285837 cri.go:89] found id: ""
	I1213 10:09:35.144555  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.144563  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:35.144569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:35.144627  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:35.170241  285837 cri.go:89] found id: ""
	I1213 10:09:35.170317  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.170340  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:35.170359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:35.170433  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:35.195958  285837 cri.go:89] found id: ""
	I1213 10:09:35.195986  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.195995  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:35.196001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:35.196066  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:35.220509  285837 cri.go:89] found id: ""
	I1213 10:09:35.220535  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.220544  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:35.220553  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:35.220563  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:35.276863  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:35.277042  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:35.294239  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:35.294265  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:35.367085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:35.367108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:35.367121  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:35.392804  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:35.392842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:37.919692  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:37.929805  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:37.929875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:37.954708  285837 cri.go:89] found id: ""
	I1213 10:09:37.954782  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.954806  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:37.954825  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:37.954914  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:37.979259  285837 cri.go:89] found id: ""
	I1213 10:09:37.979332  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.979357  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:37.979375  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:37.979459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:38.008473  285837 cri.go:89] found id: ""
	I1213 10:09:38.008554  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.008579  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:38.008597  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:38.008695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:38.051746  285837 cri.go:89] found id: ""
	I1213 10:09:38.051820  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.051843  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:38.051863  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:38.051957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:38.082373  285837 cri.go:89] found id: ""
	I1213 10:09:38.082405  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.082413  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:38.082419  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:38.082477  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:38.109623  285837 cri.go:89] found id: ""
	I1213 10:09:38.109646  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.109655  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:38.109661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:38.109718  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:38.133779  285837 cri.go:89] found id: ""
	I1213 10:09:38.133807  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.133815  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:38.133822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:38.133892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:38.158199  285837 cri.go:89] found id: ""
	I1213 10:09:38.158263  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.158286  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:38.158338  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:38.158371  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:38.171856  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:38.171885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:38.237998  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:38.238021  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:38.238033  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:38.263694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:38.263729  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:38.301569  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:38.301594  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
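
The timestamps give the cadence: one full probe-and-gather pass at 10:09:32, :35, :38, :41, and so on, roughly one every three seconds, repeating until the start timeout expires. A generic sketch of that polling shape; the interval, timeout, and function names are assumptions for illustration, not minikube's actual wait code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer re-runs check every interval until it succeeds
    // or the deadline passes, like the loop producing the log above.
    func waitForAPIServer(check func() bool, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if check() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        err := waitForAPIServer(func() bool {
            // In the log this step is `sudo pgrep -xnf kube-apiserver.*minikube.*`
            // followed by the per-component crictl probes; stubbed out here.
            return false
        }, 3*time.Second, 30*time.Second)
        fmt.Println(err)
    }
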
	I1213 10:09:40.863927  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:40.874647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:40.874715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:40.902898  285837 cri.go:89] found id: ""
	I1213 10:09:40.902922  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.902931  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:40.902939  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:40.903000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:40.928251  285837 cri.go:89] found id: ""
	I1213 10:09:40.928277  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.928287  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:40.928294  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:40.928350  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:40.952178  285837 cri.go:89] found id: ""
	I1213 10:09:40.952201  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.952210  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:40.952216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:40.952271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:40.980522  285837 cri.go:89] found id: ""
	I1213 10:09:40.980548  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.980557  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:40.980564  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:40.980620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:41.007391  285837 cri.go:89] found id: ""
	I1213 10:09:41.007417  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.007427  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:41.007433  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:41.007498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:41.056690  285837 cri.go:89] found id: ""
	I1213 10:09:41.056762  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.056786  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:41.056806  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:41.056892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:41.082372  285837 cri.go:89] found id: ""
	I1213 10:09:41.082443  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.082481  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:41.082505  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:41.082592  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:41.106556  285837 cri.go:89] found id: ""
	I1213 10:09:41.106626  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.106648  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:41.106680  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:41.106722  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:41.162248  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:41.162281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:41.175724  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:41.175753  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:41.243327  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:41.243393  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:41.243420  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:41.269060  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:41.269142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:43.812670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:43.823281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:43.823360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:43.846549  285837 cri.go:89] found id: ""
	I1213 10:09:43.846571  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.846579  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:43.846585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:43.846640  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:43.879456  285837 cri.go:89] found id: ""
	I1213 10:09:43.879541  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.879557  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:43.879563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:43.879632  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:43.904717  285837 cri.go:89] found id: ""
	I1213 10:09:43.904745  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.904755  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:43.904761  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:43.904818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:43.929847  285837 cri.go:89] found id: ""
	I1213 10:09:43.929873  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.929883  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:43.929890  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:43.929950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:43.954073  285837 cri.go:89] found id: ""
	I1213 10:09:43.954146  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.954168  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:43.954187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:43.954278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:43.979175  285837 cri.go:89] found id: ""
	I1213 10:09:43.979257  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.979280  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:43.979299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:43.979406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:44.013549  285837 cri.go:89] found id: ""
	I1213 10:09:44.013574  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.013584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:44.013590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:44.013653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:44.043145  285837 cri.go:89] found id: ""
	I1213 10:09:44.043222  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.043244  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:44.043267  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:44.043306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:44.058657  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:44.058685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:44.137763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
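
The failing command itself shows how minikube addresses the cluster: it runs the kubectl binary it installed for the requested Kubernetes version under /var/lib/minikube/binaries/<version>/, against the node-local kubeconfig, rather than the host's kubectl. A sketch of that path construction; the helper is hypothetical and only the path layout is taken from the log:

    package main

    import "fmt"

    // describeNodesCmd is a hypothetical helper: it rebuilds the exact
    // command string seen in the failures above for a given version.
    func describeNodesCmd(k8sVersion string) string {
        return fmt.Sprintf(
            "sudo /var/lib/minikube/binaries/%s/kubectl describe nodes "+
                "--kubeconfig=/var/lib/minikube/kubeconfig", k8sVersion)
    }

    func main() {
        fmt.Println(describeNodesCmd("v1.35.0-beta.0"))
    }
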
	I1213 10:09:44.137786  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:44.137799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:44.163596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:44.163630  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:44.193981  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:44.194008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:46.751860  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:46.762578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:46.762653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:46.787138  285837 cri.go:89] found id: ""
	I1213 10:09:46.787161  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.787170  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:46.787176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:46.787234  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:46.812348  285837 cri.go:89] found id: ""
	I1213 10:09:46.812371  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.812379  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:46.812386  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:46.812445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:46.840689  285837 cri.go:89] found id: ""
	I1213 10:09:46.840712  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.840721  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:46.840727  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:46.840784  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:46.870288  285837 cri.go:89] found id: ""
	I1213 10:09:46.870313  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.870322  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:46.870328  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:46.870450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:46.896231  285837 cri.go:89] found id: ""
	I1213 10:09:46.896255  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.896269  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:46.896276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:46.896334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:46.921572  285837 cri.go:89] found id: ""
	I1213 10:09:46.921604  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.921613  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:46.921636  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:46.921721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:46.948191  285837 cri.go:89] found id: ""
	I1213 10:09:46.948220  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.948229  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:46.948236  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:46.948365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:46.977518  285837 cri.go:89] found id: ""
	I1213 10:09:46.977585  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.977602  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:46.977612  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:46.977624  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:47.034861  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:47.034901  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:47.049608  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:47.049638  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:47.120624  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:47.120648  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:47.120662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:47.146083  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:47.146118  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:49.676188  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:49.688330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:49.688400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:49.714933  285837 cri.go:89] found id: ""
	I1213 10:09:49.714958  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.714967  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:49.714973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:49.715035  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:49.739883  285837 cri.go:89] found id: ""
	I1213 10:09:49.739912  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.739923  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:49.739931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:49.739990  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:49.768673  285837 cri.go:89] found id: ""
	I1213 10:09:49.768699  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.768718  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:49.768726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:49.768788  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:49.794628  285837 cri.go:89] found id: ""
	I1213 10:09:49.794694  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.794717  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:49.794735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:49.794822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:49.819205  285837 cri.go:89] found id: ""
	I1213 10:09:49.819237  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.819247  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:49.819253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:49.819318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:49.843189  285837 cri.go:89] found id: ""
	I1213 10:09:49.843212  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.843228  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:49.843235  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:49.843303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:49.867965  285837 cri.go:89] found id: ""
	I1213 10:09:49.867998  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.868008  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:49.868016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:49.868089  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:49.891561  285837 cri.go:89] found id: ""
	I1213 10:09:49.891586  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.891595  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:49.891605  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:49.891629  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:49.953785  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:49.953824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:49.967425  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:49.967453  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:50.041318  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:50.041391  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:50.041419  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:50.070955  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:50.071029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.603479  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:52.615038  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:52.615113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:52.643538  285837 cri.go:89] found id: ""
	I1213 10:09:52.643561  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.643570  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:52.643577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:52.643636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:52.668477  285837 cri.go:89] found id: ""
	I1213 10:09:52.668514  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.668523  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:52.668530  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:52.668586  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:52.695551  285837 cri.go:89] found id: ""
	I1213 10:09:52.695574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.695582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:52.695589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:52.695647  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:52.723965  285837 cri.go:89] found id: ""
	I1213 10:09:52.723991  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.724000  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:52.724007  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:52.724061  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:52.748159  285837 cri.go:89] found id: ""
	I1213 10:09:52.748186  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.748195  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:52.748202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:52.748257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:52.771805  285837 cri.go:89] found id: ""
	I1213 10:09:52.771836  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.771846  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:52.771853  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:52.771910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:52.795549  285837 cri.go:89] found id: ""
	I1213 10:09:52.795574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.795584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:52.795590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:52.795650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:52.819748  285837 cri.go:89] found id: ""
	I1213 10:09:52.819775  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.819785  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:52.819794  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:52.819805  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:52.882031  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:52.882051  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:52.882062  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:52.907759  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:52.907795  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.934360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:52.934390  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:52.989946  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:52.989982  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
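
Each pass also snapshots the node: the kubelet journal, a filtered dmesg, the containerd journal, and container status, using exactly the shell commands logged above. A local stand-in for minikube's ssh_runner that gathers the same set, with plain local exec assumed in place of the SSH session minikube actually uses:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command through bash -c, as ssh_runner
    // does in the log, and prints whatever comes back.
    func gather(label, command string) {
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
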
	I1213 10:09:55.503671  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:55.514125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:55.514196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:55.540594  285837 cri.go:89] found id: ""
	I1213 10:09:55.540621  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.540631  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:55.540637  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:55.540694  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:55.570352  285837 cri.go:89] found id: ""
	I1213 10:09:55.570378  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.570387  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:55.570395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:55.570450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:55.596509  285837 cri.go:89] found id: ""
	I1213 10:09:55.596533  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.596541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:55.596547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:55.596604  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:55.622553  285837 cri.go:89] found id: ""
	I1213 10:09:55.622579  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.622587  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:55.622593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:55.622650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:55.647770  285837 cri.go:89] found id: ""
	I1213 10:09:55.647794  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.647803  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:55.647809  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:55.647874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:55.672615  285837 cri.go:89] found id: ""
	I1213 10:09:55.672679  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.672693  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:55.672701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:55.672756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:55.697017  285837 cri.go:89] found id: ""
	I1213 10:09:55.697041  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.697050  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:55.697063  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:55.697123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:55.720795  285837 cri.go:89] found id: ""
	I1213 10:09:55.720866  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.720891  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:55.720914  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:55.720950  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:55.745823  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:55.745857  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:55.774634  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:55.774663  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:55.830064  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:55.830098  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:55.843868  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:55.843896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:55.905758  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
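	Each ~2.5 s cycle in this log performs the same probe: look for a running kube-apiserver process, then ask crictl for each expected control-plane container by name. A condensed sketch of that loop, with the component list taken from the log and the pattern quoted for shell safety (the loop itself is illustrative):
	
	    # Is a kube-apiserver process for this profile running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # Query the CRI runtime for each expected component; empty output = not found
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	
	In every cycle below, all eight queries return empty, which is why each is followed by a 'No container was found matching ...' warning.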
	I1213 10:09:58.406072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:58.418120  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:58.418199  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:58.443021  285837 cri.go:89] found id: ""
	I1213 10:09:58.443050  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.443059  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:58.443066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:58.443126  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:58.468115  285837 cri.go:89] found id: ""
	I1213 10:09:58.468139  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.468147  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:58.468154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:58.468214  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:58.496991  285837 cri.go:89] found id: ""
	I1213 10:09:58.497015  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.497025  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:58.497032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:58.497098  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:58.530053  285837 cri.go:89] found id: ""
	I1213 10:09:58.530076  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.530085  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:58.530091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:58.530149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:58.561990  285837 cri.go:89] found id: ""
	I1213 10:09:58.562013  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.562022  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:58.562028  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:58.562091  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:58.595912  285837 cri.go:89] found id: ""
	I1213 10:09:58.595984  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.596007  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:58.596026  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:58.596113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:58.626521  285837 cri.go:89] found id: ""
	I1213 10:09:58.626593  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.626616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:58.626635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:58.626720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:58.655898  285837 cri.go:89] found id: ""
	I1213 10:09:58.655963  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.655987  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:58.656008  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:58.656032  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:58.711709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:58.711741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:58.726942  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:58.726969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:58.798293  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
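	The failing "describe nodes" step runs the kubectl binary that minikube stages on the node, pointed at the node-local kubeconfig. To reproduce it interactively, something like the following should work; the profile name is a placeholder, while the binary and kubeconfig paths are the ones shown in the log:
	
	    # Run the staged kubectl inside the minikube node against its own kubeconfig
	    minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	It will keep exiting with status 1, as below, until an apiserver is actually listening on localhost:8443 inside the node.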
	I1213 10:09:58.798314  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:58.798327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:58.822936  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:58.822973  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:01.351670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:01.362442  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:01.362517  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:01.388700  285837 cri.go:89] found id: ""
	I1213 10:10:01.388734  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.388744  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:01.388751  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:01.388824  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:01.418393  285837 cri.go:89] found id: ""
	I1213 10:10:01.418471  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.418496  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:01.418515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:01.418602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:01.449860  285837 cri.go:89] found id: ""
	I1213 10:10:01.449937  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.449962  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:01.449980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:01.450064  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:01.475973  285837 cri.go:89] found id: ""
	I1213 10:10:01.476035  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.476049  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:01.476056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:01.476118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:01.501452  285837 cri.go:89] found id: ""
	I1213 10:10:01.501474  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.501499  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:01.501506  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:01.501576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:01.527738  285837 cri.go:89] found id: ""
	I1213 10:10:01.527808  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.527832  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:01.527852  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:01.527946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:01.553256  285837 cri.go:89] found id: ""
	I1213 10:10:01.553280  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.553289  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:01.553296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:01.553354  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:01.578833  285837 cri.go:89] found id: ""
	I1213 10:10:01.578855  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.578864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:01.578875  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:01.578892  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:01.634755  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:01.634790  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:01.649799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:01.649832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:01.721470  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:01.721491  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:01.721504  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:01.747322  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:01.747357  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.288307  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:04.300683  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:04.300805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:04.332215  285837 cri.go:89] found id: ""
	I1213 10:10:04.332242  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.332252  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:04.332259  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:04.332318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:04.358136  285837 cri.go:89] found id: ""
	I1213 10:10:04.358164  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.358173  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:04.358180  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:04.358248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:04.383446  285837 cri.go:89] found id: ""
	I1213 10:10:04.383479  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.383488  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:04.383493  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:04.383578  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:04.408888  285837 cri.go:89] found id: ""
	I1213 10:10:04.408914  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.408923  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:04.408930  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:04.409009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:04.438109  285837 cri.go:89] found id: ""
	I1213 10:10:04.438145  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.438155  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:04.438163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:04.438233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:04.462623  285837 cri.go:89] found id: ""
	I1213 10:10:04.462692  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.462725  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:04.462745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:04.462826  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:04.488102  285837 cri.go:89] found id: ""
	I1213 10:10:04.488127  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.488137  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:04.488143  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:04.488230  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:04.515038  285837 cri.go:89] found id: ""
	I1213 10:10:04.515078  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.515087  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:04.515096  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:04.515134  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:04.540448  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:04.540483  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.570913  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:04.570942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:04.626396  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:04.626430  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:04.639908  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:04.639938  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:04.704410  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:07.204629  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:07.215001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:07.215080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:07.239145  285837 cri.go:89] found id: ""
	I1213 10:10:07.239170  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.239180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:07.239186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:07.239243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:07.263051  285837 cri.go:89] found id: ""
	I1213 10:10:07.263077  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.263086  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:07.263092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:07.263149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:07.293024  285837 cri.go:89] found id: ""
	I1213 10:10:07.293051  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.293060  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:07.293066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:07.293142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:07.320096  285837 cri.go:89] found id: ""
	I1213 10:10:07.320119  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.320128  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:07.320133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:07.320189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:07.349635  285837 cri.go:89] found id: ""
	I1213 10:10:07.349661  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.349670  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:07.349676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:07.349733  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:07.374644  285837 cri.go:89] found id: ""
	I1213 10:10:07.374720  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.374744  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:07.374767  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:07.374875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:07.399088  285837 cri.go:89] found id: ""
	I1213 10:10:07.399108  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.399117  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:07.399123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:07.399179  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:07.423187  285837 cri.go:89] found id: ""
	I1213 10:10:07.423210  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.423219  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:07.423229  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:07.423244  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:07.478648  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:07.478682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:07.492218  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:07.492247  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:07.558077  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:07.558147  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:07.558168  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:07.583061  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:07.583093  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.116593  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:10.127456  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:10.127551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:10.157660  285837 cri.go:89] found id: ""
	I1213 10:10:10.157684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.157693  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:10.157699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:10.157758  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:10.183132  285837 cri.go:89] found id: ""
	I1213 10:10:10.183166  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.183175  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:10.183181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:10.183248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:10.209615  285837 cri.go:89] found id: ""
	I1213 10:10:10.209681  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.209704  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:10.209723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:10.209817  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:10.234760  285837 cri.go:89] found id: ""
	I1213 10:10:10.234789  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.234798  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:10.234804  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:10.234877  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:10.261577  285837 cri.go:89] found id: ""
	I1213 10:10:10.261608  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.261618  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:10.261624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:10.261682  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:10.289616  285837 cri.go:89] found id: ""
	I1213 10:10:10.289655  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.289664  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:10.289670  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:10.289742  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:10.316640  285837 cri.go:89] found id: ""
	I1213 10:10:10.316684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.316693  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:10.316699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:10.316768  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:10.346038  285837 cri.go:89] found id: ""
	I1213 10:10:10.346065  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.346074  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:10.346084  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:10.346095  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.377589  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:10.377669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:10.435680  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:10.435714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:10.449198  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:10.449226  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:10.521596  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:10.521619  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:10.521632  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.047644  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:13.059744  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:13.059820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:13.087860  285837 cri.go:89] found id: ""
	I1213 10:10:13.087901  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.087911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:13.087918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:13.087983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:13.112735  285837 cri.go:89] found id: ""
	I1213 10:10:13.112802  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.112844  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:13.112876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:13.112953  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:13.141197  285837 cri.go:89] found id: ""
	I1213 10:10:13.141223  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.141244  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:13.141255  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:13.141315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:13.165043  285837 cri.go:89] found id: ""
	I1213 10:10:13.165119  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.165143  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:13.165155  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:13.165240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:13.189664  285837 cri.go:89] found id: ""
	I1213 10:10:13.189746  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.189769  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:13.189782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:13.189854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:13.213620  285837 cri.go:89] found id: ""
	I1213 10:10:13.213686  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.213709  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:13.213723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:13.213799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:13.241644  285837 cri.go:89] found id: ""
	I1213 10:10:13.241667  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.241676  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:13.241728  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:13.241812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:13.265927  285837 cri.go:89] found id: ""
	I1213 10:10:13.265997  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.266030  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:13.266053  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:13.266079  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.293162  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:13.293239  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:13.326250  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:13.326334  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:13.386676  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:13.386710  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:13.400810  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:13.400838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:13.469704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:15.969962  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:15.980347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:15.980492  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:16.010088  285837 cri.go:89] found id: ""
	I1213 10:10:16.010118  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.010127  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:16.010133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:16.010196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:16.049187  285837 cri.go:89] found id: ""
	I1213 10:10:16.049209  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.049217  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:16.049223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:16.049291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:16.077965  285837 cri.go:89] found id: ""
	I1213 10:10:16.077987  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.077996  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:16.078002  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:16.078058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:16.108378  285837 cri.go:89] found id: ""
	I1213 10:10:16.108451  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.108474  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:16.108492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:16.108577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:16.134213  285837 cri.go:89] found id: ""
	I1213 10:10:16.134235  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.134244  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:16.134250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:16.134310  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:16.160222  285837 cri.go:89] found id: ""
	I1213 10:10:16.160255  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.160266  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:16.160273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:16.160343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:16.188619  285837 cri.go:89] found id: ""
	I1213 10:10:16.188646  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.188655  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:16.188662  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:16.188725  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:16.213285  285837 cri.go:89] found id: ""
	I1213 10:10:16.213358  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.213375  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:16.213387  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:16.213398  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:16.241893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:16.241922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:16.298312  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:16.298349  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:16.312327  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:16.312403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:16.384024  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:16.384050  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:16.384064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
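Each polling cycle above performs the same container scan: for every control-plane component, minikube shells out to "crictl ps -a --quiet --name=<component>" and treats empty output as no container found. A minimal standalone Go sketch of that scan, assuming crictl is installed and the commands run directly on the node rather than over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same component list the log scans, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs; -a includes exited
            // containers, matching the {State:all ...} filter above.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            if ids := strings.Fields(string(out)); len(ids) > 0 {
                fmt.Printf("%s: found %v\n", name, ids)
            } else {
                fmt.Printf("no container was found matching %q\n", name)
            }
        }
    }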
	I1213 10:10:18.909524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:18.920391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:18.920459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:18.945319  285837 cri.go:89] found id: ""
	I1213 10:10:18.945358  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.945367  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:18.945374  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:18.945431  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:18.968360  285837 cri.go:89] found id: ""
	I1213 10:10:18.968381  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.968390  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:18.968420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:18.968476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:18.992303  285837 cri.go:89] found id: ""
	I1213 10:10:18.992324  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.992333  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:18.992339  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:18.992393  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:19.017601  285837 cri.go:89] found id: ""
	I1213 10:10:19.017677  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.017700  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:19.017718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:19.017814  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:19.057563  285837 cri.go:89] found id: ""
	I1213 10:10:19.057636  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.057672  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:19.057695  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:19.057783  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:19.089906  285837 cri.go:89] found id: ""
	I1213 10:10:19.089929  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.089938  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:19.089944  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:19.090014  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:19.115237  285837 cri.go:89] found id: ""
	I1213 10:10:19.115258  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.115266  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:19.115272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:19.115351  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:19.140000  285837 cri.go:89] found id: ""
	I1213 10:10:19.140067  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.140090  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:19.140112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:19.140150  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:19.201866  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:19.201888  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:19.201900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:19.227103  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:19.227135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:19.253635  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:19.253664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:19.317211  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:19.317245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
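The "Gathering logs for ..." steps are plain shell invocations: journalctl for the kubelet and containerd units, a filtered dmesg, and a crictl-or-docker fallback for container status. A rough local equivalent, assuming systemd units named kubelet and containerd as on the minikube node; each command string is copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // ssh_runner.go runs these same commands through bash on the
        // node; here they run locally for illustration.
        gatherers := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"containerd", "sudo journalctl -u containerd -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range gatherers {
            out, err := exec.Command("/bin/bash", "-c", g[1]).CombinedOutput()
            fmt.Printf("==> %s logs (err=%v)\n%s\n", g[0], err, out)
        }
    }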
	I1213 10:10:21.835317  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:21.848786  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:21.848905  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:21.873912  285837 cri.go:89] found id: ""
	I1213 10:10:21.873938  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.873947  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:21.873966  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:21.874030  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:21.898927  285837 cri.go:89] found id: ""
	I1213 10:10:21.898948  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.898957  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:21.898963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:21.899017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:21.928040  285837 cri.go:89] found id: ""
	I1213 10:10:21.928067  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.928076  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:21.928083  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:21.928139  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:21.952762  285837 cri.go:89] found id: ""
	I1213 10:10:21.952784  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.952793  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:21.952800  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:21.952862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:21.977394  285837 cri.go:89] found id: ""
	I1213 10:10:21.977421  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.977430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:21.977437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:21.977502  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:22.001693  285837 cri.go:89] found id: ""
	I1213 10:10:22.001729  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.001739  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:22.001746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:22.001813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:22.044074  285837 cri.go:89] found id: ""
	I1213 10:10:22.044111  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.044120  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:22.044126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:22.044203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:22.083324  285837 cri.go:89] found id: ""
	I1213 10:10:22.083361  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.083370  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:22.083380  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:22.083392  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:22.152550  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:22.152574  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:22.152590  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:22.177867  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:22.177900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:22.205266  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:22.205296  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:22.260906  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:22.260942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:24.776001  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:24.787300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:24.787370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:24.817822  285837 cri.go:89] found id: ""
	I1213 10:10:24.817967  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.817991  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:24.818032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:24.818131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:24.843042  285837 cri.go:89] found id: ""
	I1213 10:10:24.843079  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.843088  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:24.843094  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:24.843160  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:24.866977  285837 cri.go:89] found id: ""
	I1213 10:10:24.867012  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.867022  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:24.867029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:24.867100  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:24.892141  285837 cri.go:89] found id: ""
	I1213 10:10:24.892167  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.892177  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:24.892183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:24.892258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:24.922137  285837 cri.go:89] found id: ""
	I1213 10:10:24.922207  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.922230  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:24.922248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:24.922343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:24.954689  285837 cri.go:89] found id: ""
	I1213 10:10:24.954720  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.954729  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:24.954736  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:24.954802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:24.979305  285837 cri.go:89] found id: ""
	I1213 10:10:24.979379  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.979400  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:24.979420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:24.979545  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:25.012112  285837 cri.go:89] found id: ""
	I1213 10:10:25.012139  285837 logs.go:282] 0 containers: []
	W1213 10:10:25.012149  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:25.012163  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:25.012177  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:25.083061  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:25.083100  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:25.100686  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:25.100713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:25.172319  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:25.172341  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:25.172354  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:25.198195  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:25.198230  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
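Every kubectl attempt in these cycles dies the same way: the TCP connect to localhost:8443 is refused, which is consistent with the scans above finding no kube-apiserver container at all, so the failure happens before TLS or authentication is even attempted. A quick probe that separates "nothing listening" from "server up but unhealthy", using the address from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial here matches the memcache.go errors above:
        // kubectl never gets past the TCP layer to API discovery.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }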
	I1213 10:10:27.728458  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:27.739147  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:27.739212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:27.768935  285837 cri.go:89] found id: ""
	I1213 10:10:27.768964  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.768973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:27.768980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:27.769069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:27.793269  285837 cri.go:89] found id: ""
	I1213 10:10:27.793294  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.793303  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:27.793309  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:27.793381  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:27.819458  285837 cri.go:89] found id: ""
	I1213 10:10:27.819481  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.819490  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:27.819496  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:27.819585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:27.844796  285837 cri.go:89] found id: ""
	I1213 10:10:27.844819  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.844828  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:27.844834  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:27.844892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:27.873605  285837 cri.go:89] found id: ""
	I1213 10:10:27.873629  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.873638  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:27.873644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:27.873726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:27.897452  285837 cri.go:89] found id: ""
	I1213 10:10:27.897476  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.897485  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:27.897491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:27.897548  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:27.923761  285837 cri.go:89] found id: ""
	I1213 10:10:27.923786  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.923796  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:27.923802  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:27.923880  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:27.952811  285837 cri.go:89] found id: ""
	I1213 10:10:27.952875  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.952907  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:27.952949  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:27.952978  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:27.982369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:27.982444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:28.039695  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:28.039739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:28.059367  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:28.059394  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:28.141898  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:28.141920  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:28.141931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
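Taken together, the cycles from 10:10:16 onward form one retry loop: check for a kube-apiserver process with pgrep, rescan the CRI containers, gather logs, sleep, repeat until the start timeout expires. A schematic of that loop; the 3-second spacing is read off the log timestamps and the timeout is illustrative, not taken from minikube source:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the pgrep check that opens each cycle;
    // pgrep exits nonzero when no process matches the pattern.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf",
            "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            // ... rescan containers and gather logs here, as in the log ...
            time.Sleep(3 * time.Second) // matches the cycle spacing above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }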
	I1213 10:10:30.668303  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:30.681191  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:30.681264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:30.708785  285837 cri.go:89] found id: ""
	I1213 10:10:30.708809  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.708817  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:30.708823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:30.708887  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:30.733895  285837 cri.go:89] found id: ""
	I1213 10:10:30.733918  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.733926  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:30.733932  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:30.733991  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:30.762790  285837 cri.go:89] found id: ""
	I1213 10:10:30.762811  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.762820  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:30.762826  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:30.762891  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:30.786743  285837 cri.go:89] found id: ""
	I1213 10:10:30.786807  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.786829  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:30.786846  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:30.786925  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:30.813249  285837 cri.go:89] found id: ""
	I1213 10:10:30.813272  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.813281  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:30.813288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:30.813347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:30.837491  285837 cri.go:89] found id: ""
	I1213 10:10:30.837520  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.837529  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:30.837536  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:30.837596  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:30.862539  285837 cri.go:89] found id: ""
	I1213 10:10:30.862599  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.862622  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:30.862640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:30.862714  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:30.887350  285837 cri.go:89] found id: ""
	I1213 10:10:30.887371  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.887379  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:30.887388  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:30.887399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:30.943669  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:30.943701  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:30.957123  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:30.957172  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:31.036468  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:31.036496  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:31.036509  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:31.065951  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:31.065987  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
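The container-status gatherer embeds a small fallback chain: use crictl when it is on PATH (the "which crictl || echo crictl" idiom), otherwise fall back to "docker ps -a". The same preference order sketched in Go, where exec.LookPath performs the PATH check that which does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present, otherwise fall back to docker,
        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        fmt.Printf("using %s (err=%v)\n%s", tool, err, out)
    }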
	I1213 10:10:33.600787  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:33.611280  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:33.611352  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:33.640061  285837 cri.go:89] found id: ""
	I1213 10:10:33.640084  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.640093  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:33.640099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:33.640159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:33.664736  285837 cri.go:89] found id: ""
	I1213 10:10:33.664763  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.664772  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:33.664780  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:33.664839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:33.688858  285837 cri.go:89] found id: ""
	I1213 10:10:33.688882  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.688892  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:33.688898  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:33.688955  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:33.719915  285837 cri.go:89] found id: ""
	I1213 10:10:33.719944  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.719953  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:33.719960  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:33.720015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:33.744897  285837 cri.go:89] found id: ""
	I1213 10:10:33.744927  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.744937  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:33.744943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:33.745037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:33.773037  285837 cri.go:89] found id: ""
	I1213 10:10:33.773059  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.773067  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:33.773073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:33.773134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:33.797407  285837 cri.go:89] found id: ""
	I1213 10:10:33.797433  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.797443  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:33.797449  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:33.797510  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:33.825833  285837 cri.go:89] found id: ""
	I1213 10:10:33.825859  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.825868  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:33.825877  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:33.825889  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:33.851755  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:33.851788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.884360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:33.884385  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:33.940045  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:33.940080  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:33.954004  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:33.954039  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:34.035282  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:36.535645  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:36.547382  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:36.547469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:36.579677  285837 cri.go:89] found id: ""
	I1213 10:10:36.579701  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.579711  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:36.579725  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:36.579802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:36.606029  285837 cri.go:89] found id: ""
	I1213 10:10:36.606058  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.606067  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:36.606073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:36.606134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:36.631618  285837 cri.go:89] found id: ""
	I1213 10:10:36.631640  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.631649  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:36.631655  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:36.631712  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:36.656376  285837 cri.go:89] found id: ""
	I1213 10:10:36.656399  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.656407  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:36.656413  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:36.656469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:36.684348  285837 cri.go:89] found id: ""
	I1213 10:10:36.684369  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.684377  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:36.684383  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:36.684443  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:36.708549  285837 cri.go:89] found id: ""
	I1213 10:10:36.708578  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.708587  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:36.708594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:36.708653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:36.732630  285837 cri.go:89] found id: ""
	I1213 10:10:36.732659  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.732669  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:36.732677  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:36.732738  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:36.761465  285837 cri.go:89] found id: ""
	I1213 10:10:36.761493  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.761503  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:36.761513  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:36.761524  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:36.774752  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:36.774787  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:36.837540  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:36.829412    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.830021    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.831594    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.832091    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.833636    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:36.837603  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:36.837625  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:36.862806  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:36.862844  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:36.893277  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:36.893302  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
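The kubeconfig passed on every describe-nodes attempt is the node-local copy at /var/lib/minikube/kubeconfig, and the errors show it pointing at https://localhost:8443. One way to confirm which server a kubeconfig targets, using the node's own kubectl binary as the log does; the jsonpath expression assumes the standard single-cluster layout minikube writes:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Print the server URL the kubeconfig points at; in this run
        // that is the localhost:8443 endpoint every attempt fails to reach.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "config", "view",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-o", "jsonpath={.clusters[0].cluster.server}").CombinedOutput()
        fmt.Printf("server: %s (err=%v)\n", out, err)
    }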
	I1213 10:10:39.453851  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:39.464513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:39.464595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:39.488288  285837 cri.go:89] found id: ""
	I1213 10:10:39.488310  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.488319  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:39.488329  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:39.488386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:39.513054  285837 cri.go:89] found id: ""
	I1213 10:10:39.513077  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.513085  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:39.513091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:39.513156  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:39.542442  285837 cri.go:89] found id: ""
	I1213 10:10:39.542465  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.542474  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:39.542480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:39.542535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:39.575244  285837 cri.go:89] found id: ""
	I1213 10:10:39.575271  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.575280  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:39.575286  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:39.575341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:39.605371  285837 cri.go:89] found id: ""
	I1213 10:10:39.605402  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.605411  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:39.605417  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:39.605475  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:39.629581  285837 cri.go:89] found id: ""
	I1213 10:10:39.629608  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.629617  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:39.629624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:39.629680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:39.657061  285837 cri.go:89] found id: ""
	I1213 10:10:39.657089  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.657098  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:39.657104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:39.657162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:39.680815  285837 cri.go:89] found id: ""
	I1213 10:10:39.680880  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.680894  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:39.680904  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:39.680915  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.738790  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:39.738822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:39.751947  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:39.751976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:39.816341  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:39.816364  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:39.816376  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:39.841100  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:39.841132  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
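The cycle above is minikube's apiserver health check: logs.go first looks for a running kube-apiserver process with pgrep, then cri.go asks crictl for each expected control-plane container by name, and an empty ID list is reported as `No container was found matching ...`. A sketch of the same per-component sweep run by hand, reusing the exact crictl invocation from the log; the minikube-ssh wrapper and default profile name are assumptions:

	# list container IDs for each control-plane component; empty output = not found
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  minikube -p minikube ssh -- sudo crictl ps -a --quiet --name="$c"
	done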
	I1213 10:10:42.369166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:42.380009  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:42.380075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:42.411353  285837 cri.go:89] found id: ""
	I1213 10:10:42.411380  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.411390  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:42.411397  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:42.411455  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:42.436688  285837 cri.go:89] found id: ""
	I1213 10:10:42.436718  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.436728  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:42.436734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:42.436816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:42.462185  285837 cri.go:89] found id: ""
	I1213 10:10:42.462211  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.462220  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:42.462226  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:42.462285  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:42.487623  285837 cri.go:89] found id: ""
	I1213 10:10:42.487647  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.487657  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:42.487663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:42.487722  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:42.513508  285837 cri.go:89] found id: ""
	I1213 10:10:42.513534  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.513543  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:42.513549  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:42.513610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:42.544400  285837 cri.go:89] found id: ""
	I1213 10:10:42.544424  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.544432  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:42.544439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:42.544498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:42.571251  285837 cri.go:89] found id: ""
	I1213 10:10:42.571281  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.571290  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:42.571297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:42.571353  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:42.608069  285837 cri.go:89] found id: ""
	I1213 10:10:42.608094  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.608103  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:42.608113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:42.608124  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:42.663779  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:42.663815  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:42.677800  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:42.677839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:42.742889  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:42.742913  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:42.742927  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:42.769648  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:42.769682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.299918  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:45.313054  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:45.313153  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:45.339870  285837 cri.go:89] found id: ""
	I1213 10:10:45.339904  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.339914  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:45.339935  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:45.340013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:45.364702  285837 cri.go:89] found id: ""
	I1213 10:10:45.364736  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.364746  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:45.364752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:45.364815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:45.389159  285837 cri.go:89] found id: ""
	I1213 10:10:45.389189  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.389200  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:45.389206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:45.389286  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:45.413889  285837 cri.go:89] found id: ""
	I1213 10:10:45.413918  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.413927  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:45.413933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:45.414000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:45.438849  285837 cri.go:89] found id: ""
	I1213 10:10:45.438885  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.438895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:45.438901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:45.438962  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:45.469093  285837 cri.go:89] found id: ""
	I1213 10:10:45.469116  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.469124  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:45.469130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:45.469233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:45.493365  285837 cri.go:89] found id: ""
	I1213 10:10:45.493391  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.493401  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:45.493408  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:45.493465  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:45.517810  285837 cri.go:89] found id: ""
	I1213 10:10:45.517839  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.517848  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:45.517858  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:45.517870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:45.532750  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:45.532781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:45.610253  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:45.601970    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.602367    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604011    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604678    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.606346    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:45.601970    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.602367    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604011    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604678    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.606346    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:45.610276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:45.610289  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:45.635170  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:45.635201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.662649  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:45.662727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.218853  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:48.230454  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:48.230539  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:48.256210  285837 cri.go:89] found id: ""
	I1213 10:10:48.256235  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.256244  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:48.256250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:48.256311  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:48.288857  285837 cri.go:89] found id: ""
	I1213 10:10:48.288882  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.288891  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:48.288897  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:48.288952  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:48.317960  285837 cri.go:89] found id: ""
	I1213 10:10:48.317994  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.318020  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:48.318034  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:48.318108  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:48.347646  285837 cri.go:89] found id: ""
	I1213 10:10:48.347724  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.347738  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:48.347746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:48.347815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:48.372818  285837 cri.go:89] found id: ""
	I1213 10:10:48.372840  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.372849  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:48.372855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:48.372915  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:48.400208  285837 cri.go:89] found id: ""
	I1213 10:10:48.400281  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.400296  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:48.400304  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:48.400373  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:48.424245  285837 cri.go:89] found id: ""
	I1213 10:10:48.424272  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.424282  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:48.424287  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:48.424345  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:48.450041  285837 cri.go:89] found id: ""
	I1213 10:10:48.450074  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.450083  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:48.450092  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:48.450103  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:48.516704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:48.507097    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.507702    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509433    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509994    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.511703    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:48.507097    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.507702    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509433    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509994    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.511703    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:48.516726  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:48.516739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:48.544227  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:48.544262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:48.581036  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:48.581067  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.643405  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:48.643440  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.157408  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:51.168232  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:51.168298  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:51.194497  285837 cri.go:89] found id: ""
	I1213 10:10:51.194533  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.194545  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:51.194552  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:51.194619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:51.219079  285837 cri.go:89] found id: ""
	I1213 10:10:51.219099  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.219107  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:51.219112  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:51.219167  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:51.244709  285837 cri.go:89] found id: ""
	I1213 10:10:51.244732  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.244740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:51.244747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:51.244806  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:51.284617  285837 cri.go:89] found id: ""
	I1213 10:10:51.284643  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.284651  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:51.284657  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:51.284713  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:51.314124  285837 cri.go:89] found id: ""
	I1213 10:10:51.314152  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.314162  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:51.314170  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:51.314228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:51.346119  285837 cri.go:89] found id: ""
	I1213 10:10:51.346144  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.346153  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:51.346160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:51.346218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:51.371813  285837 cri.go:89] found id: ""
	I1213 10:10:51.371841  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.371850  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:51.371861  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:51.371918  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:51.397126  285837 cri.go:89] found id: ""
	I1213 10:10:51.397150  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.397159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:51.397174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:51.397216  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:51.426866  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:51.426894  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:51.483164  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:51.483196  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.497003  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:51.497028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:51.582114  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:51.573287    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.574073    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.575716    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.576298    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.577879    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:51.573287    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.574073    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.575716    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.576298    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.577879    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:51.582138  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:51.582151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.110647  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:54.121581  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:54.121653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:54.145568  285837 cri.go:89] found id: ""
	I1213 10:10:54.145591  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.145600  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:54.145606  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:54.145667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:54.171162  285837 cri.go:89] found id: ""
	I1213 10:10:54.171186  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.171195  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:54.171202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:54.171258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:54.196117  285837 cri.go:89] found id: ""
	I1213 10:10:54.196140  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.196148  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:54.196154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:54.196211  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:54.221183  285837 cri.go:89] found id: ""
	I1213 10:10:54.221226  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.221236  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:54.221243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:54.221300  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:54.246527  285837 cri.go:89] found id: ""
	I1213 10:10:54.246569  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.246578  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:54.246585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:54.246648  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:54.273839  285837 cri.go:89] found id: ""
	I1213 10:10:54.273866  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.273875  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:54.273881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:54.273936  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:54.305443  285837 cri.go:89] found id: ""
	I1213 10:10:54.305468  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.305477  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:54.305483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:54.305566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:54.337568  285837 cri.go:89] found id: ""
	I1213 10:10:54.337634  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.337649  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:54.337659  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:54.337671  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:54.394420  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:54.394456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:54.408137  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:54.408167  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:54.476257  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:54.467629    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.468321    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470154    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470801    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.472389    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:54.467629    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.468321    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470154    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470801    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.472389    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:54.476279  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:54.476294  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.501779  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:54.501818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.039708  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:57.051575  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:57.051656  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:57.077147  285837 cri.go:89] found id: ""
	I1213 10:10:57.077171  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.077180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:57.077186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:57.077249  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:57.100638  285837 cri.go:89] found id: ""
	I1213 10:10:57.100662  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.100672  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:57.100679  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:57.100736  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:57.124849  285837 cri.go:89] found id: ""
	I1213 10:10:57.124872  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.124880  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:57.124886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:57.124942  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:57.149947  285837 cri.go:89] found id: ""
	I1213 10:10:57.149970  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.149979  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:57.149985  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:57.150041  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:57.177921  285837 cri.go:89] found id: ""
	I1213 10:10:57.177944  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.177952  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:57.177958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:57.178015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:57.202761  285837 cri.go:89] found id: ""
	I1213 10:10:57.202785  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.202793  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:57.202799  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:57.202861  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:57.232853  285837 cri.go:89] found id: ""
	I1213 10:10:57.232880  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.232890  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:57.232896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:57.232958  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:57.257698  285837 cri.go:89] found id: ""
	I1213 10:10:57.257725  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.257734  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:57.257744  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:57.257754  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:57.284012  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:57.284084  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.318707  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:57.318744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:57.380534  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:57.380571  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:57.394671  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:57.394704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:57.463198  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:59.963429  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:59.974005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:59.974074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:00.002819  285837 cri.go:89] found id: ""
	I1213 10:11:00.002842  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.002853  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:00.002860  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:00.002927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:00.094025  285837 cri.go:89] found id: ""
	I1213 10:11:00.094053  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.094064  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:00.094071  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:00.094142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:00.174313  285837 cri.go:89] found id: ""
	I1213 10:11:00.174336  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.174345  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:00.174352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:00.174417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:00.249900  285837 cri.go:89] found id: ""
	I1213 10:11:00.249939  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.249949  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:00.249968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:00.250053  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:00.326093  285837 cri.go:89] found id: ""
	I1213 10:11:00.326121  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.326130  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:00.326138  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:00.326207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:00.398659  285837 cri.go:89] found id: ""
	I1213 10:11:00.398685  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.398695  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:00.398702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:00.398771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:00.438080  285837 cri.go:89] found id: ""
	I1213 10:11:00.438106  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.438116  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:00.438123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:00.438200  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:00.466610  285837 cri.go:89] found id: ""
	I1213 10:11:00.466635  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.466644  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:00.466655  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:00.466668  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:00.524796  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:00.524832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:00.541430  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:00.541461  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:00.620210  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:00.611464    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.612064    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.613626    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.614181    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.615780    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:00.620234  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:00.620248  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:00.646443  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:00.646481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
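The block above is one iteration of minikube's apiserver wait loop: roughly every three seconds it probes for a kube-apiserver process over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*) and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before trying again. Below is a minimal sketch of that poll-and-diagnose pattern, assuming a Linux host with pgrep and journalctl; it runs the commands locally with os/exec instead of minikube's SSH runner, and the function names are illustrative, not minikube's actual API.

    // pollwait.go - a sketch of the poll-and-diagnose loop seen in the log.
    // All names here are illustrative, not minikube code.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe: pgrep exits non-zero when no process matches the pattern.
    func apiserverRunning() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // gatherDiagnostics collects the same journald units the log gathers on
    // each failed probe (kubelet and containerd, last 400 lines each).
    func gatherDiagnostics() {
        for _, unit := range []string{"kubelet", "containerd"} {
            out, _ := exec.Command("journalctl", "-u", unit, "-n", "400").CombinedOutput()
            fmt.Printf("--- %s: %d bytes of logs ---\n", unit, len(out))
        }
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            gatherDiagnostics()
            time.Sleep(3 * time.Second) // matches the ~3 s cadence of the timestamps above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }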
	I1213 10:11:03.175597  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:03.187100  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:03.187169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:03.213075  285837 cri.go:89] found id: ""
	I1213 10:11:03.213099  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.213108  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:03.213114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:03.213173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:03.238387  285837 cri.go:89] found id: ""
	I1213 10:11:03.238413  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.238422  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:03.238428  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:03.238485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:03.263021  285837 cri.go:89] found id: ""
	I1213 10:11:03.263047  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.263057  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:03.263064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:03.263120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:03.287967  285837 cri.go:89] found id: ""
	I1213 10:11:03.287990  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.287999  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:03.288005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:03.288070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:03.313649  285837 cri.go:89] found id: ""
	I1213 10:11:03.313676  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.313685  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:03.313691  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:03.313782  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:03.341329  285837 cri.go:89] found id: ""
	I1213 10:11:03.341395  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.341410  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:03.341418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:03.341480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:03.367350  285837 cri.go:89] found id: ""
	I1213 10:11:03.367376  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.367386  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:03.367392  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:03.367450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:03.394523  285837 cri.go:89] found id: ""
	I1213 10:11:03.394548  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.394556  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:03.394566  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:03.394579  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:03.408418  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:03.408444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:03.481932  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:03.473186    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.474065    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.475730    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.476279    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.477971    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:03.481953  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:03.481965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:03.508165  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:03.508197  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.564104  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:03.564135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
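Every kubectl invocation above fails with "dial tcp [::1]:8443: connect: connection refused" because nothing is listening on the apiserver port: with no kube-apiserver container running, the TCP connection is refused before any TLS or HTTP exchange can happen. A minimal sketch of that reachability check (not minikube code; host and port hard-coded to match the log):

    // probe.go - a sketch of why every kubectl call above fails: the TCP
    // dial itself is refused, so no API request is ever made.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no apiserver container this prints a "connection refused"
            // error of the same shape as the ones in the log.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }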
	I1213 10:11:06.137748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:06.148529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:06.148601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:06.173118  285837 cri.go:89] found id: ""
	I1213 10:11:06.173142  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.173151  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:06.173164  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:06.173225  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:06.198710  285837 cri.go:89] found id: ""
	I1213 10:11:06.198732  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.198741  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:06.198747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:06.198802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:06.224139  285837 cri.go:89] found id: ""
	I1213 10:11:06.224163  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.224171  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:06.224183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:06.224246  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:06.249528  285837 cri.go:89] found id: ""
	I1213 10:11:06.249553  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.249568  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:06.249577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:06.249636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:06.283856  285837 cri.go:89] found id: ""
	I1213 10:11:06.283886  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.283894  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:06.283901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:06.283964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:06.307922  285837 cri.go:89] found id: ""
	I1213 10:11:06.307947  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.307956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:06.307963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:06.308020  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:06.332705  285837 cri.go:89] found id: ""
	I1213 10:11:06.332731  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.332739  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:06.332746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:06.332805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:06.358646  285837 cri.go:89] found id: ""
	I1213 10:11:06.358672  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.358681  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:06.358691  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:06.358702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.414726  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:06.414763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:06.428830  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:06.428866  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:06.495345  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:06.495373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:06.495386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:06.523314  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:06.523359  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
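Each "crictl ps -a --quiet --name=<component>" probe above prints one container ID per line and nothing else, so empty output is what produces the `found id: ""` / `0 containers: []` lines: the control-plane containers were never created. A loose sketch of that listing step, assuming only that crictl is on PATH; the listContainers helper is illustrative, not the cri.go implementation:

    // crilist.go - a sketch of the "listing CRI containers" step above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs crictl with --quiet, which prints one container ID
    // per line, and returns the non-empty lines.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            // An empty slice corresponds to the `found id: ""` lines above.
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }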
	I1213 10:11:09.076696  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:09.087477  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:09.087569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:09.111658  285837 cri.go:89] found id: ""
	I1213 10:11:09.111681  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.111690  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:09.111696  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:09.111759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:09.135775  285837 cri.go:89] found id: ""
	I1213 10:11:09.135801  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.135809  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:09.135816  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:09.135872  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:09.165477  285837 cri.go:89] found id: ""
	I1213 10:11:09.165500  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.165514  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:09.165520  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:09.165576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:09.194399  285837 cri.go:89] found id: ""
	I1213 10:11:09.194421  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.194437  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:09.194446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:09.194503  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:09.223486  285837 cri.go:89] found id: ""
	I1213 10:11:09.223508  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.223537  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:09.223544  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:09.223603  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:09.252819  285837 cri.go:89] found id: ""
	I1213 10:11:09.252842  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.252851  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:09.252857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:09.252916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:09.277570  285837 cri.go:89] found id: ""
	I1213 10:11:09.277641  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.277656  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:09.277666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:09.277729  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:09.302629  285837 cri.go:89] found id: ""
	I1213 10:11:09.302652  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.302661  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:09.302671  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:09.302682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:09.358773  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:09.358811  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:09.372815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:09.372842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:09.441717  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:09.441793  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:09.441822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:09.466485  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:09.466517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
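The container-status command above, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, resolves crictl's full path so the call survives sudo's restricted PATH and falls back to docker ps -a if crictl is absent or fails. A minimal Go sketch of the same preference order (illustrative only; minikube runs the equivalent as the single shell pipeline shown):

    // status.go - a sketch of the crictl-then-docker fallback.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl (resolved to a full path, as `which
    // crictl` does in the shell version) and falls back to `docker ps -a`.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }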
	I1213 10:11:11.993817  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:12.018615  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:12.018690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:12.044911  285837 cri.go:89] found id: ""
	I1213 10:11:12.044934  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.044943  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:12.044949  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:12.045013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:12.069918  285837 cri.go:89] found id: ""
	I1213 10:11:12.069940  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.069949  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:12.069955  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:12.070018  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:12.094440  285837 cri.go:89] found id: ""
	I1213 10:11:12.094461  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.094470  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:12.094476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:12.094530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:12.118079  285837 cri.go:89] found id: ""
	I1213 10:11:12.118099  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.118108  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:12.118114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:12.118169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:12.145090  285837 cri.go:89] found id: ""
	I1213 10:11:12.145115  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.145125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:12.145131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:12.145186  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:12.168654  285837 cri.go:89] found id: ""
	I1213 10:11:12.168725  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.168749  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:12.168762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:12.168820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:12.192603  285837 cri.go:89] found id: ""
	I1213 10:11:12.192677  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.192704  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:12.192726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:12.192802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:12.216389  285837 cri.go:89] found id: ""
	I1213 10:11:12.216454  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.216478  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:12.216501  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:12.216517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:12.273281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:12.273315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:12.286866  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:12.286903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:12.353852  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:12.353884  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:12.353914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:12.379896  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:12.379931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:14.910354  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:14.920854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:14.920922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:14.946408  285837 cri.go:89] found id: ""
	I1213 10:11:14.946430  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.946439  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:14.946446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:14.946501  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:14.977293  285837 cri.go:89] found id: ""
	I1213 10:11:14.977322  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.977337  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:14.977343  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:14.977414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:15.010967  285837 cri.go:89] found id: ""
	I1213 10:11:15.011055  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.011079  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:15.011098  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:15.011201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:15.050270  285837 cri.go:89] found id: ""
	I1213 10:11:15.050294  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.050314  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:15.050321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:15.050387  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:15.076902  285837 cri.go:89] found id: ""
	I1213 10:11:15.076927  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.076936  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:15.076943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:15.077003  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:15.106349  285837 cri.go:89] found id: ""
	I1213 10:11:15.106379  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.106389  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:15.106395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:15.106458  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:15.134472  285837 cri.go:89] found id: ""
	I1213 10:11:15.134497  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.134506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:15.134512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:15.134569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:15.161713  285837 cri.go:89] found id: ""
	I1213 10:11:15.161740  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.161750  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:15.161759  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:15.161773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:15.217480  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:15.217512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:15.231189  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:15.231217  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:15.304481  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:15.304502  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:15.304515  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:15.329819  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:15.329853  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:17.857044  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:17.868755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:17.868830  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:17.892866  285837 cri.go:89] found id: ""
	I1213 10:11:17.892890  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.892900  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:17.892906  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:17.892969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:17.918428  285837 cri.go:89] found id: ""
	I1213 10:11:17.918450  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.918459  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:17.918467  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:17.918520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:17.941924  285837 cri.go:89] found id: ""
	I1213 10:11:17.941945  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.941953  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:17.941959  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:17.942015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:17.966130  285837 cri.go:89] found id: ""
	I1213 10:11:17.966153  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.966162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:17.966168  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:17.966266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:17.994412  285837 cri.go:89] found id: ""
	I1213 10:11:17.994437  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.994446  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:17.994452  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:17.994509  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:18.020369  285837 cri.go:89] found id: ""
	I1213 10:11:18.020392  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.020401  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:18.020407  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:18.020485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:18.047590  285837 cri.go:89] found id: ""
	I1213 10:11:18.047614  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.047623  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:18.047629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:18.047689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:18.074433  285837 cri.go:89] found id: ""
	I1213 10:11:18.074456  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.074465  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:18.074475  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:18.074487  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:18.101094  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:18.101129  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:18.129666  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:18.129695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:18.185620  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:18.185652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:18.199477  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:18.199503  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:18.264408  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:20.765401  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:20.778692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:20.778759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:20.856776  285837 cri.go:89] found id: ""
	I1213 10:11:20.856798  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.856807  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:20.856813  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:20.856871  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:20.886867  285837 cri.go:89] found id: ""
	I1213 10:11:20.886896  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.886912  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:20.886918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:20.886992  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:20.915220  285837 cri.go:89] found id: ""
	I1213 10:11:20.915245  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.915254  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:20.915260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:20.915318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:20.939562  285837 cri.go:89] found id: ""
	I1213 10:11:20.939585  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.939594  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:20.939600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:20.939667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:20.964172  285837 cri.go:89] found id: ""
	I1213 10:11:20.964195  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.964204  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:20.964210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:20.964269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:20.989184  285837 cri.go:89] found id: ""
	I1213 10:11:20.989206  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.989215  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:20.989221  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:20.989287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:21.015584  285837 cri.go:89] found id: ""
	I1213 10:11:21.015608  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.015616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:21.015623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:21.015692  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:21.041789  285837 cri.go:89] found id: ""
	I1213 10:11:21.041812  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.041820  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:21.041829  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:21.041842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:21.055424  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:21.055450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:21.119438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:21.119456  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:21.119469  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:21.144678  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:21.144713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:21.177284  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:21.177313  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:23.742410  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:23.752527  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:23.752601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:23.812957  285837 cri.go:89] found id: ""
	I1213 10:11:23.812979  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.812987  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:23.812994  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:23.813052  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:23.858208  285837 cri.go:89] found id: ""
	I1213 10:11:23.858236  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.858246  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:23.858253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:23.858315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:23.885293  285837 cri.go:89] found id: ""
	I1213 10:11:23.885318  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.885328  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:23.885334  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:23.885396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:23.911374  285837 cri.go:89] found id: ""
	I1213 10:11:23.911399  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.911409  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:23.911541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:23.911621  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:23.940586  285837 cri.go:89] found id: ""
	I1213 10:11:23.940611  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.940620  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:23.940625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:23.940683  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:23.965387  285837 cri.go:89] found id: ""
	I1213 10:11:23.965413  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.965423  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:23.965430  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:23.965491  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:23.989910  285837 cri.go:89] found id: ""
	I1213 10:11:23.989936  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.989945  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:23.989952  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:23.990009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:24.016511  285837 cri.go:89] found id: ""
	I1213 10:11:24.016539  285837 logs.go:282] 0 containers: []
	W1213 10:11:24.016548  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:24.016558  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:24.016569  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:24.076500  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:24.076542  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:24.090891  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:24.090920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:24.158444  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:24.158466  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:24.158478  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:24.184352  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:24.184389  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
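	[editor's note] One full diagnostic pass ends here; the same pass repeats below roughly every three seconds, with only timestamps and PIDs changing. For manual reproduction on the node, these are the commands the harness issues over SSH, copied from the Run: lines above (the for-loop packaging and the shell quoting around the pgrep pattern are added here; getting a shell on the node, e.g. with minikube ssh, is assumed):

	    # Is an apiserver process running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Per-component CRI lookups; every one returns an empty list in this log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"
	    done

	    # Log sources the harness gathers when nothing is found:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig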
	I1213 10:11:26.715866  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:26.726291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:26.726358  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:26.749726  285837 cri.go:89] found id: ""
	I1213 10:11:26.749748  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.749757  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:26.749763  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:26.749820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:26.798311  285837 cri.go:89] found id: ""
	I1213 10:11:26.798333  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.798341  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:26.798347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:26.798403  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:26.855482  285837 cri.go:89] found id: ""
	I1213 10:11:26.855506  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.855541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:26.855548  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:26.855606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:26.887763  285837 cri.go:89] found id: ""
	I1213 10:11:26.887833  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.887857  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:26.887876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:26.887963  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:26.913160  285837 cri.go:89] found id: ""
	I1213 10:11:26.913183  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.913192  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:26.913199  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:26.913266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:26.940887  285837 cri.go:89] found id: ""
	I1213 10:11:26.940965  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.940996  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:26.941004  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:26.941070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:26.965212  285837 cri.go:89] found id: ""
	I1213 10:11:26.965233  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.965242  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:26.965248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:26.965313  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:26.989687  285837 cri.go:89] found id: ""
	I1213 10:11:26.989710  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.989718  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:26.989733  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:26.989744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:27.020130  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:27.020156  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:27.075963  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:27.076001  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:27.089421  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:27.089452  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:27.154208  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:27.154231  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:27.154243  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:29.679077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:29.689987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:29.690113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:29.719241  285837 cri.go:89] found id: ""
	I1213 10:11:29.719304  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.719318  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:29.719325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:29.719382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:29.745413  285837 cri.go:89] found id: ""
	I1213 10:11:29.745511  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.745533  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:29.745541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:29.745624  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:29.789119  285837 cri.go:89] found id: ""
	I1213 10:11:29.789193  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.789228  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:29.789251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:29.789362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:29.865327  285837 cri.go:89] found id: ""
	I1213 10:11:29.865413  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.865429  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:29.865437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:29.865495  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:29.890183  285837 cri.go:89] found id: ""
	I1213 10:11:29.890260  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.890283  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:29.890301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:29.890397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:29.919549  285837 cri.go:89] found id: ""
	I1213 10:11:29.919622  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.919646  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:29.919666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:29.919771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:29.945219  285837 cri.go:89] found id: ""
	I1213 10:11:29.945248  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.945257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:29.945264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:29.945364  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:29.973791  285837 cri.go:89] found id: ""
	I1213 10:11:29.973822  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.973832  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:29.973842  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:29.973870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:30.030470  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:30.030512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:30.047458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:30.047559  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:30.123116  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:30.123215  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:30.123250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:30.149652  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:30.149689  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:32.679599  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:32.690298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:32.690372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:32.713694  285837 cri.go:89] found id: ""
	I1213 10:11:32.713718  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.713726  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:32.713733  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:32.713790  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:32.738621  285837 cri.go:89] found id: ""
	I1213 10:11:32.738645  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.738654  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:32.738660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:32.738720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:32.762830  285837 cri.go:89] found id: ""
	I1213 10:11:32.762855  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.762865  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:32.762871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:32.762928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:32.799422  285837 cri.go:89] found id: ""
	I1213 10:11:32.799448  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.799464  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:32.799471  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:32.799543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:32.856726  285837 cri.go:89] found id: ""
	I1213 10:11:32.856759  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.856768  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:32.856775  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:32.856839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:32.883319  285837 cri.go:89] found id: ""
	I1213 10:11:32.883346  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.883356  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:32.883362  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:32.883422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:32.909028  285837 cri.go:89] found id: ""
	I1213 10:11:32.909054  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.909063  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:32.909070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:32.909127  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:32.938657  285837 cri.go:89] found id: ""
	I1213 10:11:32.938691  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.938701  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:32.938710  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:32.938721  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:32.994400  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:32.994434  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.008614  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:33.008653  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:33.076509  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:33.076539  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:33.076553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:33.101599  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:33.101631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:35.629072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:35.639660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:35.639731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:35.664032  285837 cri.go:89] found id: ""
	I1213 10:11:35.664060  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.664068  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:35.664076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:35.664130  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:35.692081  285837 cri.go:89] found id: ""
	I1213 10:11:35.692108  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.692118  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:35.692124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:35.692180  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:35.717152  285837 cri.go:89] found id: ""
	I1213 10:11:35.717177  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.717186  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:35.717192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:35.717251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:35.741898  285837 cri.go:89] found id: ""
	I1213 10:11:35.741931  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.741940  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:35.741946  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:35.742013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:35.766255  285837 cri.go:89] found id: ""
	I1213 10:11:35.766289  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.766298  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:35.766305  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:35.766370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:35.829052  285837 cri.go:89] found id: ""
	I1213 10:11:35.829093  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.829104  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:35.829111  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:35.829189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:35.872000  285837 cri.go:89] found id: ""
	I1213 10:11:35.872072  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.872085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:35.872092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:35.872162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:35.897842  285837 cri.go:89] found id: ""
	I1213 10:11:35.897874  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.897883  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:35.897893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:35.897911  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:35.955605  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:35.955640  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:35.969234  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:35.969262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:36.035000  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:36.035063  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:36.035083  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:36.061000  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:36.061037  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:38.589308  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:38.599753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:38.599818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:38.623379  285837 cri.go:89] found id: ""
	I1213 10:11:38.623400  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.623409  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:38.623418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:38.623476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:38.649806  285837 cri.go:89] found id: ""
	I1213 10:11:38.649830  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.649840  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:38.649847  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:38.649908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:38.674234  285837 cri.go:89] found id: ""
	I1213 10:11:38.674257  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.674266  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:38.674272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:38.674334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:38.698759  285837 cri.go:89] found id: ""
	I1213 10:11:38.698780  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.698789  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:38.698795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:38.698851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:38.725178  285837 cri.go:89] found id: ""
	I1213 10:11:38.725205  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.725215  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:38.725222  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:38.725281  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:38.766167  285837 cri.go:89] found id: ""
	I1213 10:11:38.766194  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.766204  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:38.766210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:38.766265  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:38.808982  285837 cri.go:89] found id: ""
	I1213 10:11:38.809009  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.809017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:38.809023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:38.809080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:38.870538  285837 cri.go:89] found id: ""
	I1213 10:11:38.870560  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.870568  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:38.870578  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:38.870589  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:38.928916  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:38.928958  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:38.943274  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:38.943304  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:39.011182  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:39.011208  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:39.011223  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:39.038343  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:39.038377  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.571555  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:41.582245  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:41.582319  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:41.609447  285837 cri.go:89] found id: ""
	I1213 10:11:41.609473  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.609483  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:41.609490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:41.609546  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:41.637801  285837 cri.go:89] found id: ""
	I1213 10:11:41.637823  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.637832  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:41.637838  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:41.637901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:41.661762  285837 cri.go:89] found id: ""
	I1213 10:11:41.661786  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.661795  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:41.661801  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:41.661865  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:41.685944  285837 cri.go:89] found id: ""
	I1213 10:11:41.685966  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.685981  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:41.685987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:41.686044  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:41.710847  285837 cri.go:89] found id: ""
	I1213 10:11:41.710874  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.710883  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:41.710889  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:41.710947  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:41.739921  285837 cri.go:89] found id: ""
	I1213 10:11:41.739947  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.739956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:41.739962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:41.740021  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:41.764216  285837 cri.go:89] found id: ""
	I1213 10:11:41.764245  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.764254  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:41.764260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:41.764318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:41.822929  285837 cri.go:89] found id: ""
	I1213 10:11:41.822960  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.822969  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:41.822995  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:41.823012  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.860056  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:41.860087  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:41.916192  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:41.916225  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:41.932977  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:41.933051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:41.996358  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:41.996420  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:41.996436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
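	[editor's note] The timestamps show this pass repeating on a roughly three-second cadence, consistent with a wait-for-apiserver loop in the harness (the logs.go/cri.go call sites above). A shell approximation of that loop, offered as a sketch of the observed behavior rather than the actual Go implementation:

	    # Sketch only: mirrors the observed ~3s polling cadence.
	    # Exits once a kube-apiserver container is actually running.
	    until sudo crictl ps --quiet --name=kube-apiserver | grep -q .; do
	        sleep 3
	    done

	In this section every pass finds nothing, so the equivalent wait never completes.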
	I1213 10:11:44.525380  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:44.536068  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:44.536183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:44.561444  285837 cri.go:89] found id: ""
	I1213 10:11:44.561476  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.561485  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:44.561491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:44.561552  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:44.586945  285837 cri.go:89] found id: ""
	I1213 10:11:44.586975  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.586985  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:44.586991  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:44.587057  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:44.612842  285837 cri.go:89] found id: ""
	I1213 10:11:44.612874  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.612885  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:44.612891  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:44.612949  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:44.638444  285837 cri.go:89] found id: ""
	I1213 10:11:44.638472  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.638482  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:44.638489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:44.638547  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:44.664168  285837 cri.go:89] found id: ""
	I1213 10:11:44.664191  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.664200  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:44.664206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:44.664264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:44.693563  285837 cri.go:89] found id: ""
	I1213 10:11:44.693634  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.693659  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:44.693675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:44.693748  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:44.719349  285837 cri.go:89] found id: ""
	I1213 10:11:44.719376  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.719385  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:44.719391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:44.719456  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:44.744438  285837 cri.go:89] found id: ""
	I1213 10:11:44.744467  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.744476  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:44.744485  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:44.744498  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:44.815232  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:44.815321  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:44.836304  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:44.836331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:44.928422  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:44.928443  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:44.928456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:44.954308  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:44.954348  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
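	Each pass through this stretch of the log is the same health probe: minikube checks for a kube-apiserver process, then queries the CRI for every expected control-plane container by name, and falls back to collecting kubelet, dmesg, containerd, and container-status output once every query comes back empty. A minimal shell sketch of one pass, rebuilt from the commands the log itself records (the loop structure and variable names are illustrative, not minikube's implementation):
	
	  #!/bin/bash
	  # One probe pass, as recorded above. crictl --name filters by name regex,
	  # -a includes exited containers, --quiet prints only container IDs.
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0   # apiserver process already up
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	  done
	  # Fallback log gathering, same commands as the Run: lines above:
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400
	  sudo crictl ps -a || sudo docker ps -a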
	I1213 10:11:47.482268  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:47.492724  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:47.492804  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:47.517619  285837 cri.go:89] found id: ""
	I1213 10:11:47.517646  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.517655  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:47.517661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:47.517731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:47.543100  285837 cri.go:89] found id: ""
	I1213 10:11:47.543137  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.543150  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:47.543160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:47.543223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:47.573882  285837 cri.go:89] found id: ""
	I1213 10:11:47.573906  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.573915  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:47.573922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:47.573979  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:47.598649  285837 cri.go:89] found id: ""
	I1213 10:11:47.598676  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.598685  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:47.598692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:47.598753  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:47.629998  285837 cri.go:89] found id: ""
	I1213 10:11:47.630034  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.630048  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:47.630056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:47.630135  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:47.658608  285837 cri.go:89] found id: ""
	I1213 10:11:47.658652  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.658662  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:47.658669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:47.658739  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:47.685293  285837 cri.go:89] found id: ""
	I1213 10:11:47.685337  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.685346  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:47.685352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:47.685419  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:47.711048  285837 cri.go:89] found id: ""
	I1213 10:11:47.711072  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.711081  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:47.711091  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:47.711102  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:47.774561  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:47.774611  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:47.814155  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:47.814228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:47.909982  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:47.910015  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:47.910028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:47.938465  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:47.938502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:50.475972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:50.488352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:50.488421  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:50.513516  285837 cri.go:89] found id: ""
	I1213 10:11:50.513548  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.513558  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:50.513565  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:50.513619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:50.538473  285837 cri.go:89] found id: ""
	I1213 10:11:50.538498  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.538507  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:50.538513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:50.538569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:50.562753  285837 cri.go:89] found id: ""
	I1213 10:11:50.562775  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.562784  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:50.562790  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:50.562844  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:50.587561  285837 cri.go:89] found id: ""
	I1213 10:11:50.587587  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.587597  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:50.587603  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:50.587658  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:50.612019  285837 cri.go:89] found id: ""
	I1213 10:11:50.612048  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.612058  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:50.612064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:50.612123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:50.636935  285837 cri.go:89] found id: ""
	I1213 10:11:50.636959  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.636967  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:50.636973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:50.637034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:50.661053  285837 cri.go:89] found id: ""
	I1213 10:11:50.661076  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.661085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:50.661091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:50.661148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:50.690108  285837 cri.go:89] found id: ""
	I1213 10:11:50.690178  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.690201  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:50.690223  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:50.690262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:50.748741  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:50.748775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:50.762458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:50.762490  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:50.892763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:50.892783  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:50.892796  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:50.918206  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:50.918240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
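	The timestamps show a new pass starting roughly every three seconds (10:11:44, :47, :50, :53, ...), consistent with a fixed-interval retry wrapped around the pgrep check until some deadline expires. A sketch of such a wait loop; the 3-second interval is read off the timestamps, while the timeout value here is an assumption, not minikube's actual setting:
	
	  # Hypothetical retry loop matching the observed cadence.
	  deadline=$(( $(date +%s) + 300 ))               # assumed 5-minute budget
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	          echo "timed out waiting for kube-apiserver" >&2
	          exit 1
	      fi
	      sleep 3                                     # interval inferred from the log
	  done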
	I1213 10:11:53.447378  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:53.457486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:53.457551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:53.482258  285837 cri.go:89] found id: ""
	I1213 10:11:53.482283  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.482292  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:53.482299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:53.482357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:53.511304  285837 cri.go:89] found id: ""
	I1213 10:11:53.511330  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.511339  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:53.511345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:53.511405  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:53.540251  285837 cri.go:89] found id: ""
	I1213 10:11:53.540277  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.540286  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:53.540291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:53.540349  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:53.565753  285837 cri.go:89] found id: ""
	I1213 10:11:53.565781  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.565791  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:53.565797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:53.565855  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:53.595124  285837 cri.go:89] found id: ""
	I1213 10:11:53.595151  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.595160  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:53.595166  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:53.595224  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:53.620269  285837 cri.go:89] found id: ""
	I1213 10:11:53.620293  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.620302  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:53.620311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:53.620369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:53.645281  285837 cri.go:89] found id: ""
	I1213 10:11:53.645309  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.645318  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:53.645325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:53.645388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:53.670326  285837 cri.go:89] found id: ""
	I1213 10:11:53.670351  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.670360  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:53.670369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:53.670386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:53.726845  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:53.726879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:53.740167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:53.740194  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:53.843634  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:53.843657  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:53.843669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:53.870910  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:53.870995  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:56.405428  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:56.415940  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:56.416016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:56.449974  285837 cri.go:89] found id: ""
	I1213 10:11:56.449996  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.450004  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:56.450010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:56.450069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:56.474847  285837 cri.go:89] found id: ""
	I1213 10:11:56.474873  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.474882  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:56.474888  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:56.474946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:56.504742  285837 cri.go:89] found id: ""
	I1213 10:11:56.504768  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.504777  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:56.504783  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:56.504841  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:56.529471  285837 cri.go:89] found id: ""
	I1213 10:11:56.529493  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.529502  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:56.529509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:56.529569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:56.553719  285837 cri.go:89] found id: ""
	I1213 10:11:56.553740  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.553749  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:56.553755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:56.553812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:56.579917  285837 cri.go:89] found id: ""
	I1213 10:11:56.579942  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.579950  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:56.579957  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:56.580015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:56.603606  285837 cri.go:89] found id: ""
	I1213 10:11:56.603629  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.603638  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:56.603644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:56.603702  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:56.628438  285837 cri.go:89] found id: ""
	I1213 10:11:56.628460  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.628469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:56.628479  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:56.628491  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:56.655218  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:56.655245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:56.711105  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:56.711138  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:56.724564  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:56.724597  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:56.800105  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:56.800126  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:56.800141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.341824  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:59.351965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:59.352032  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:59.376522  285837 cri.go:89] found id: ""
	I1213 10:11:59.376544  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.376553  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:59.376559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:59.376623  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:59.405422  285837 cri.go:89] found id: ""
	I1213 10:11:59.405497  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.405522  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:59.405537  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:59.405608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:59.430317  285837 cri.go:89] found id: ""
	I1213 10:11:59.430344  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.430353  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:59.430359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:59.430417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:59.457827  285837 cri.go:89] found id: ""
	I1213 10:11:59.457854  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.457862  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:59.457868  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:59.457924  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:59.483234  285837 cri.go:89] found id: ""
	I1213 10:11:59.483261  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.483270  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:59.483277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:59.483337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:59.508270  285837 cri.go:89] found id: ""
	I1213 10:11:59.508296  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.508314  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:59.508322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:59.508379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:59.532819  285837 cri.go:89] found id: ""
	I1213 10:11:59.532842  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.532851  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:59.532857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:59.532913  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:59.556482  285837 cri.go:89] found id: ""
	I1213 10:11:59.556508  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.556517  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:59.556527  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:59.556540  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:59.611281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:59.611315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:59.624666  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:59.624694  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:59.690085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:59.690108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:59.690122  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.715666  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:59.715703  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:02.245206  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:02.256067  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:02.256147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:02.280777  285837 cri.go:89] found id: ""
	I1213 10:12:02.280801  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.280809  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:02.280821  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:02.280885  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:02.305877  285837 cri.go:89] found id: ""
	I1213 10:12:02.305905  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.305914  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:02.305920  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:02.305988  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:02.330860  285837 cri.go:89] found id: ""
	I1213 10:12:02.330886  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.330894  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:02.330900  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:02.330965  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:02.356613  285837 cri.go:89] found id: ""
	I1213 10:12:02.356649  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.356659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:02.356665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:02.356746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:02.388158  285837 cri.go:89] found id: ""
	I1213 10:12:02.388181  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.388190  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:02.388196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:02.388256  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:02.415431  285837 cri.go:89] found id: ""
	I1213 10:12:02.415454  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.415462  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:02.415468  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:02.415538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:02.442554  285837 cri.go:89] found id: ""
	I1213 10:12:02.442580  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.442589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:02.442595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:02.442654  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:02.468134  285837 cri.go:89] found id: ""
	I1213 10:12:02.468159  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.468167  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:02.468177  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:02.468188  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:02.526799  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:02.526832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:02.542508  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:02.542533  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:02.616614  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:02.616637  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:02.616650  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:02.641382  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:02.641415  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:05.169197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:05.179948  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:05.180017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:05.205082  285837 cri.go:89] found id: ""
	I1213 10:12:05.205105  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.205113  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:05.205119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:05.205176  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:05.234272  285837 cri.go:89] found id: ""
	I1213 10:12:05.234295  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.234305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:05.234311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:05.234369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:05.259024  285837 cri.go:89] found id: ""
	I1213 10:12:05.259047  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.259055  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:05.259062  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:05.259120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:05.287223  285837 cri.go:89] found id: ""
	I1213 10:12:05.287249  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.287257  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:05.287264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:05.287323  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:05.311741  285837 cri.go:89] found id: ""
	I1213 10:12:05.311831  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.311859  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:05.311904  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:05.312016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:05.337137  285837 cri.go:89] found id: ""
	I1213 10:12:05.337161  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.337170  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:05.337176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:05.337232  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:05.361938  285837 cri.go:89] found id: ""
	I1213 10:12:05.361967  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.361976  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:05.361982  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:05.362063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:05.387423  285837 cri.go:89] found id: ""
	I1213 10:12:05.387460  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.387469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:05.387478  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:05.387489  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:05.446385  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:05.446423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:05.460052  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:05.460075  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:05.534925  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:05.534954  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:05.534969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:05.561237  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:05.561278  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:08.090523  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:08.103723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:08.103793  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:08.130437  285837 cri.go:89] found id: ""
	I1213 10:12:08.130464  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.130473  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:08.130479  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:08.130536  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:08.158259  285837 cri.go:89] found id: ""
	I1213 10:12:08.158286  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.158295  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:08.158301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:08.158359  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:08.183457  285837 cri.go:89] found id: ""
	I1213 10:12:08.183484  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.183493  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:08.183499  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:08.183589  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:08.207480  285837 cri.go:89] found id: ""
	I1213 10:12:08.207507  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.207613  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:08.207620  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:08.207681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:08.231959  285837 cri.go:89] found id: ""
	I1213 10:12:08.232037  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.232053  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:08.232061  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:08.232131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:08.255921  285837 cri.go:89] found id: ""
	I1213 10:12:08.255986  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.256003  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:08.256010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:08.256074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:08.280187  285837 cri.go:89] found id: ""
	I1213 10:12:08.280254  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.280269  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:08.280276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:08.280332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:08.308900  285837 cri.go:89] found id: ""
	I1213 10:12:08.308974  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.308997  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
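	The eight probes above are one crictl invocation applied to each expected control-plane (and addon) container name; an empty ID list is what produces the paired `found id: ""` / `0 containers` lines. A compact equivalent to run on the node (a sketch; the component list is copied from the log):
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done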
	I1213 10:12:08.309014  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:08.309029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:08.322959  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:08.322986  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:08.387674  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
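	Every describe-nodes attempt in this section fails identically: kubectl cannot reach the API server on localhost:8443, which is consistent with the crictl probes above finding no kube-apiserver container at all. To confirm the missing listener by hand on the node, a sketch (ss and curl are assumed to be available; they are not part of the test's own tooling):
	
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    curl -ksS https://localhost:8443/healthz || true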
	I1213 10:12:08.387701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:08.387715  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:08.413378  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:08.413414  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:08.444856  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:08.444888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
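	The remaining sources gathered each cycle are the kubelet and containerd journals plus the kernel ring buffer. The same commands can be run interactively (copied from the invocations above, with --no-pager added for manual use); when the apiserver container never appears, the kubelet unit is usually the one that explains why (image pulls, static-pod manifest errors, cgroup problems):
	
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo journalctl -u containerd -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400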
	I1213 10:12:11.000292  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:11.012216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:11.012287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:11.063803  285837 cri.go:89] found id: ""
	I1213 10:12:11.063829  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.063838  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:11.063845  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:11.063910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:11.103072  285837 cri.go:89] found id: ""
	I1213 10:12:11.103099  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.103109  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:11.103115  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:11.103171  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:11.138581  285837 cri.go:89] found id: ""
	I1213 10:12:11.138606  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.138614  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:11.138631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:11.138686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:11.163663  285837 cri.go:89] found id: ""
	I1213 10:12:11.163735  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.163760  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:11.163779  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:11.163862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:11.188635  285837 cri.go:89] found id: ""
	I1213 10:12:11.188701  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.188716  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:11.188722  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:11.188779  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:11.217597  285837 cri.go:89] found id: ""
	I1213 10:12:11.217620  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.217628  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:11.217634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:11.217690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:11.241986  285837 cri.go:89] found id: ""
	I1213 10:12:11.242009  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.242017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:11.242023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:11.242078  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:11.266556  285837 cri.go:89] found id: ""
	I1213 10:12:11.266578  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.266586  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:11.266596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:11.266607  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:11.298567  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:11.298592  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.354117  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:11.354151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:11.367112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:11.367187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:11.430754  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:11.430832  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:11.430859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:13.957251  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:13.968979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:13.969058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:13.994304  285837 cri.go:89] found id: ""
	I1213 10:12:13.994326  285837 logs.go:282] 0 containers: []
	W1213 10:12:13.994334  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:13.994341  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:13.994396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:14.032552  285837 cri.go:89] found id: ""
	I1213 10:12:14.032584  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.032593  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:14.032600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:14.032663  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:14.104797  285837 cri.go:89] found id: ""
	I1213 10:12:14.104823  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.104833  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:14.104839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:14.104901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:14.130796  285837 cri.go:89] found id: ""
	I1213 10:12:14.130821  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.130831  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:14.130837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:14.130892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:14.157587  285837 cri.go:89] found id: ""
	I1213 10:12:14.157616  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.157625  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:14.157631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:14.157689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:14.183166  285837 cri.go:89] found id: ""
	I1213 10:12:14.183191  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.183199  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:14.183205  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:14.183271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:14.207844  285837 cri.go:89] found id: ""
	I1213 10:12:14.207871  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.207880  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:14.207886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:14.207943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:14.232398  285837 cri.go:89] found id: ""
	I1213 10:12:14.232420  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.232429  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:14.232438  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:14.232450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:14.263838  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:14.263869  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:14.322835  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:14.322870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:14.336577  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:14.336609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:14.404961  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:14.405007  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:14.405047  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:16.930423  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:16.941126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:16.941197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:16.968990  285837 cri.go:89] found id: ""
	I1213 10:12:16.969013  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.969023  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:16.969029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:16.969093  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:16.994277  285837 cri.go:89] found id: ""
	I1213 10:12:16.994298  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.994307  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:16.994319  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:16.994374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:17.052160  285837 cri.go:89] found id: ""
	I1213 10:12:17.052187  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.052196  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:17.052202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:17.052260  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:17.112056  285837 cri.go:89] found id: ""
	I1213 10:12:17.112122  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.112136  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:17.112142  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:17.112201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:17.137264  285837 cri.go:89] found id: ""
	I1213 10:12:17.137287  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.137295  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:17.137301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:17.137356  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:17.161759  285837 cri.go:89] found id: ""
	I1213 10:12:17.161780  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.161802  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:17.161808  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:17.161864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:17.187256  285837 cri.go:89] found id: ""
	I1213 10:12:17.187288  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.187296  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:17.187302  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:17.187372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:17.213316  285837 cri.go:89] found id: ""
	I1213 10:12:17.213380  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.213400  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:17.213413  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:17.213424  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:17.241644  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:17.241674  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:17.298584  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:17.298617  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:17.313303  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:17.313331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:17.387719  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:17.387742  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:17.387755  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:19.919282  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:19.929646  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:19.929711  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:19.961717  285837 cri.go:89] found id: ""
	I1213 10:12:19.961739  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.961748  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:19.961754  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:19.961811  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:19.986281  285837 cri.go:89] found id: ""
	I1213 10:12:19.986306  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.986315  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:19.986321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:19.986375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:20.035442  285837 cri.go:89] found id: ""
	I1213 10:12:20.035468  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.035478  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:20.035484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:20.035574  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:20.086605  285837 cri.go:89] found id: ""
	I1213 10:12:20.086627  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.086635  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:20.086642  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:20.086698  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:20.121043  285837 cri.go:89] found id: ""
	I1213 10:12:20.121065  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.121073  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:20.121079  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:20.121136  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:20.148016  285837 cri.go:89] found id: ""
	I1213 10:12:20.148083  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.148105  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:20.148124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:20.148209  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:20.175168  285837 cri.go:89] found id: ""
	I1213 10:12:20.175234  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.175257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:20.175276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:20.175363  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:20.206568  285837 cri.go:89] found id: ""
	I1213 10:12:20.206590  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.206599  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:20.206608  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:20.206619  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:20.234244  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:20.234308  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:20.290937  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:20.290972  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:20.304498  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:20.304527  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:20.367763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:20.367830  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:20.367849  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:22.894711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:22.905901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:22.905969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:22.936437  285837 cri.go:89] found id: ""
	I1213 10:12:22.936460  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.936468  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:22.936474  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:22.936533  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:22.961367  285837 cri.go:89] found id: ""
	I1213 10:12:22.961390  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.961416  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:22.961425  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:22.961484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:22.984924  285837 cri.go:89] found id: ""
	I1213 10:12:22.984949  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.984958  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:22.984964  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:22.985046  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:23.012110  285837 cri.go:89] found id: ""
	I1213 10:12:23.012175  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.012191  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:23.012198  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:23.012258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:23.053789  285837 cri.go:89] found id: ""
	I1213 10:12:23.053816  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.053825  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:23.053831  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:23.053888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:23.102082  285837 cri.go:89] found id: ""
	I1213 10:12:23.102104  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.102112  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:23.102118  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:23.102173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:23.139793  285837 cri.go:89] found id: ""
	I1213 10:12:23.139820  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.139830  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:23.139836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:23.139892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:23.163400  285837 cri.go:89] found id: ""
	I1213 10:12:23.163426  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.163436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:23.163451  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:23.163464  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:23.227709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:23.227744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:23.241604  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:23.241631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:23.305636  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:23.305670  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:23.305683  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:23.331847  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:23.331879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:25.858551  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:25.871752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:25.871822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:25.897476  285837 cri.go:89] found id: ""
	I1213 10:12:25.897527  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.897536  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:25.897543  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:25.897600  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:25.925782  285837 cri.go:89] found id: ""
	I1213 10:12:25.925807  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.925817  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:25.925823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:25.925906  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:25.949723  285837 cri.go:89] found id: ""
	I1213 10:12:25.949750  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.949760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:25.949766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:25.949842  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:25.973991  285837 cri.go:89] found id: ""
	I1213 10:12:25.974016  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.974025  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:25.974032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:25.974107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:26.001033  285837 cri.go:89] found id: ""
	I1213 10:12:26.001056  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.001064  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:26.001070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:26.001144  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:26.077273  285837 cri.go:89] found id: ""
	I1213 10:12:26.077300  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.077309  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:26.077316  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:26.077397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:26.122203  285837 cri.go:89] found id: ""
	I1213 10:12:26.122230  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.122240  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:26.122246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:26.122346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:26.147712  285837 cri.go:89] found id: ""
	I1213 10:12:26.147736  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.147745  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:26.147781  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:26.147799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:26.203487  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:26.203528  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:26.217213  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:26.217246  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:26.284727  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:26.276312   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.277260   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.278746   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.279123   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.280769   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:26.284751  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:26.284763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:26.312716  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:26.312773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:28.841875  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:28.852491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:28.852562  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:28.881629  285837 cri.go:89] found id: ""
	I1213 10:12:28.881653  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.881662  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:28.881669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:28.881728  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:28.906270  285837 cri.go:89] found id: ""
	I1213 10:12:28.906296  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.906306  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:28.906312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:28.906370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:28.931578  285837 cri.go:89] found id: ""
	I1213 10:12:28.931599  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.931607  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:28.931612  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:28.931666  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:28.957311  285837 cri.go:89] found id: ""
	I1213 10:12:28.957334  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.957343  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:28.957349  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:28.957406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:28.981753  285837 cri.go:89] found id: ""
	I1213 10:12:28.981778  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.981787  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:28.981794  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:28.981849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:29.006917  285837 cri.go:89] found id: ""
	I1213 10:12:29.006945  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.006955  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:29.006962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:29.007029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:29.066909  285837 cri.go:89] found id: ""
	I1213 10:12:29.066935  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.066944  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:29.066950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:29.067008  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:29.105599  285837 cri.go:89] found id: ""
	I1213 10:12:29.105625  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.105633  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:29.105642  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:29.105652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:29.130961  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:29.131003  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:29.157785  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:29.157819  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:29.213436  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:29.213472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:29.227454  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:29.227485  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:29.298087  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:29.289850   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.290315   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.291761   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.292182   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.293636   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:31.798509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:31.809145  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:31.809221  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:31.833247  285837 cri.go:89] found id: ""
	I1213 10:12:31.833272  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.833281  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:31.833290  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:31.833348  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:31.861756  285837 cri.go:89] found id: ""
	I1213 10:12:31.861779  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.861789  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:31.861795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:31.861851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:31.885473  285837 cri.go:89] found id: ""
	I1213 10:12:31.885496  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.885506  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:31.885512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:31.885566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:31.908602  285837 cri.go:89] found id: ""
	I1213 10:12:31.908626  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.908634  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:31.908640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:31.908695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:31.933964  285837 cri.go:89] found id: ""
	I1213 10:12:31.933990  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.933999  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:31.934005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:31.934063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:31.962393  285837 cri.go:89] found id: ""
	I1213 10:12:31.962416  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.962424  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:31.962431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:31.962490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:31.986650  285837 cri.go:89] found id: ""
	I1213 10:12:31.986676  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.986685  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:31.986692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:31.986749  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:32.017192  285837 cri.go:89] found id: ""
	I1213 10:12:32.017220  285837 logs.go:282] 0 containers: []
	W1213 10:12:32.017229  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:32.017239  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:32.017252  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:32.035285  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:32.035316  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:32.145875  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:32.134854   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.135586   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.137228   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.140039   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.141704   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:32.145896  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:32.145909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:32.172371  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:32.172409  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:32.202803  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:32.202833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:34.759246  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:34.770746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:34.770823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:34.798561  285837 cri.go:89] found id: ""
	I1213 10:12:34.798585  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.798594  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:34.798601  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:34.798664  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:34.824521  285837 cri.go:89] found id: ""
	I1213 10:12:34.824544  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.824553  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:34.824559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:34.824616  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:34.848643  285837 cri.go:89] found id: ""
	I1213 10:12:34.848670  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.848680  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:34.848687  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:34.848746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:34.874242  285837 cri.go:89] found id: ""
	I1213 10:12:34.874263  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.874271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:34.874277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:34.874331  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:34.898270  285837 cri.go:89] found id: ""
	I1213 10:12:34.898298  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.898308  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:34.898314  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:34.898374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:34.922469  285837 cri.go:89] found id: ""
	I1213 10:12:34.922492  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.922502  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:34.922508  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:34.922565  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:34.949223  285837 cri.go:89] found id: ""
	I1213 10:12:34.949250  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.949259  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:34.949266  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:34.949320  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:34.977644  285837 cri.go:89] found id: ""
	I1213 10:12:34.977675  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.977685  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:34.977696  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:34.977707  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:35.038624  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:35.038662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:35.079394  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:35.079475  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:35.160019  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:35.151819   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.152538   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154163   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154458   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.155960   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:35.160066  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:35.160078  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:35.186026  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:35.186058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:37.713450  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:37.724509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:37.724585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:37.748173  285837 cri.go:89] found id: ""
	I1213 10:12:37.748197  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.748206  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:37.748213  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:37.748274  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:37.772262  285837 cri.go:89] found id: ""
	I1213 10:12:37.772285  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.772294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:37.772312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:37.772371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:37.797053  285837 cri.go:89] found id: ""
	I1213 10:12:37.797077  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.797086  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:37.797093  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:37.797151  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:37.821445  285837 cri.go:89] found id: ""
	I1213 10:12:37.821468  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.821477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:37.821484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:37.821538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:37.848175  285837 cri.go:89] found id: ""
	I1213 10:12:37.848199  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.848208  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:37.848214  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:37.848272  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:37.882751  285837 cri.go:89] found id: ""
	I1213 10:12:37.882774  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.882784  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:37.882789  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:37.882847  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:37.907236  285837 cri.go:89] found id: ""
	I1213 10:12:37.907262  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.907271  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:37.907277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:37.907334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:37.931030  285837 cri.go:89] found id: ""
	I1213 10:12:37.931053  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.931061  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:37.931070  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:37.931082  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:37.944201  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:37.944228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:38.014013  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:38.001007   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.001921   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.006866   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.007610   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.009335   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:38.014037  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:38.014051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:38.050241  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:38.050336  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:38.123205  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:38.123240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:40.686197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:40.696710  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:40.696797  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:40.726004  285837 cri.go:89] found id: ""
	I1213 10:12:40.726031  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.726040  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:40.726046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:40.726104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:40.751506  285837 cri.go:89] found id: ""
	I1213 10:12:40.751558  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.751567  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:40.751573  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:40.751637  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:40.777206  285837 cri.go:89] found id: ""
	I1213 10:12:40.777232  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.777241  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:40.777247  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:40.777307  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:40.806234  285837 cri.go:89] found id: ""
	I1213 10:12:40.806256  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.806264  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:40.806270  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:40.806326  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:40.835873  285837 cri.go:89] found id: ""
	I1213 10:12:40.835898  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.835907  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:40.835913  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:40.835969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:40.861792  285837 cri.go:89] found id: ""
	I1213 10:12:40.861821  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.861830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:40.861836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:40.861897  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:40.887384  285837 cri.go:89] found id: ""
	I1213 10:12:40.887409  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.887418  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:40.887424  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:40.887482  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:40.918474  285837 cri.go:89] found id: ""
	I1213 10:12:40.918499  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.918508  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:40.918518  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:40.918529  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:40.974634  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:40.974669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:40.988450  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:40.988481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:41.102570  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:41.091923   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.092616   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094183   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094723   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.096608   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:41.102639  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:41.102664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:41.132124  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:41.132159  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.660524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:43.671119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:43.671190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:43.696313  285837 cri.go:89] found id: ""
	I1213 10:12:43.696343  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.696356  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:43.696364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:43.696422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:43.720831  285837 cri.go:89] found id: ""
	I1213 10:12:43.720856  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.720865  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:43.720871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:43.720930  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:43.745280  285837 cri.go:89] found id: ""
	I1213 10:12:43.745305  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.745314  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:43.745321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:43.745382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:43.771809  285837 cri.go:89] found id: ""
	I1213 10:12:43.771832  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.771842  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:43.771848  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:43.771919  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:43.795691  285837 cri.go:89] found id: ""
	I1213 10:12:43.795715  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.795725  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:43.795731  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:43.795789  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:43.821222  285837 cri.go:89] found id: ""
	I1213 10:12:43.821246  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.821254  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:43.821261  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:43.821316  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:43.849405  285837 cri.go:89] found id: ""
	I1213 10:12:43.849428  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.849437  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:43.849450  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:43.849515  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:43.874124  285837 cri.go:89] found id: ""
	I1213 10:12:43.874150  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.874159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:43.874167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:43.874178  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:43.938106  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:43.929845   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.930320   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932022   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932327   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.933807   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:43.938129  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:43.938141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:43.963803  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:43.963838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.994003  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:43.994030  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:44.069701  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:44.069786  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:46.587357  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:46.597851  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:46.597931  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:46.622018  285837 cri.go:89] found id: ""
	I1213 10:12:46.622044  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.622054  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:46.622060  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:46.622119  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:46.647494  285837 cri.go:89] found id: ""
	I1213 10:12:46.647537  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.647547  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:46.647553  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:46.647612  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:46.673199  285837 cri.go:89] found id: ""
	I1213 10:12:46.673223  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.673237  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:46.673243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:46.673302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:46.702715  285837 cri.go:89] found id: ""
	I1213 10:12:46.702777  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.702799  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:46.702818  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:46.702888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:46.732013  285837 cri.go:89] found id: ""
	I1213 10:12:46.732036  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.732044  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:46.732049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:46.732111  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:46.755882  285837 cri.go:89] found id: ""
	I1213 10:12:46.755907  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.755925  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:46.755933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:46.755993  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:46.780993  285837 cri.go:89] found id: ""
	I1213 10:12:46.781016  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.781025  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:46.781031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:46.781094  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:46.806179  285837 cri.go:89] found id: ""
	I1213 10:12:46.806255  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.806280  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:46.806305  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:46.806342  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:46.863518  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:46.863553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:46.877399  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:46.877428  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:46.946626  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:46.939032   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.939507   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941182   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941602   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.942800   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:46.946696  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:46.946739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:46.972274  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:46.972306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:49.510021  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:49.520415  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:49.520489  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:49.544492  285837 cri.go:89] found id: ""
	I1213 10:12:49.544515  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.544524  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:49.544531  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:49.544595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:49.574538  285837 cri.go:89] found id: ""
	I1213 10:12:49.574564  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.574573  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:49.574593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:49.574659  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:49.603237  285837 cri.go:89] found id: ""
	I1213 10:12:49.603267  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.603277  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:49.603283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:49.603339  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:49.627482  285837 cri.go:89] found id: ""
	I1213 10:12:49.627508  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.627547  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:49.627555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:49.627635  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:49.652503  285837 cri.go:89] found id: ""
	I1213 10:12:49.652532  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.652541  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:49.652547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:49.652620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:49.677443  285837 cri.go:89] found id: ""
	I1213 10:12:49.677474  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.677483  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:49.677490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:49.677551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:49.702698  285837 cri.go:89] found id: ""
	I1213 10:12:49.702723  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.702733  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:49.702750  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:49.702813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:49.731706  285837 cri.go:89] found id: ""
	I1213 10:12:49.731727  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.731735  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:49.731750  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:49.731762  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:49.787702  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:49.787741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:49.801570  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:49.801602  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:49.870136  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:49.861042   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.862455   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.863332   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.864338   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.865026   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:49.870158  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:49.870171  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:49.896174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:49.896211  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:52.425030  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:52.438702  285837 out.go:203] 
	W1213 10:12:52.441528  285837 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:12:52.441562  285837 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:12:52.441572  285837 out.go:285] * Related issues:
	W1213 10:12:52.441583  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:12:52.441596  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:12:52.444462  285837 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
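For anyone triaging this exit status 105 (K8S_APISERVER_MISSING above): minikube's own suggestion is to check the apiserver flags and SELinux. A minimal manual triage sketch, assuming the profile name newest-cni-987495 taken from this run and that crictl, journalctl, and curl are present in the node image (these commands are illustrative, not part of the test harness):

	# Confirm the apiserver never started as a container or process inside the node
	minikube -p newest-cni-987495 ssh -- sudo crictl ps -a --name kube-apiserver
	minikube -p newest-cni-987495 ssh -- sudo pgrep -af kube-apiserver
	# kubelet launches the static apiserver pod, so its journal usually names the blocker
	minikube -p newest-cni-987495 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
	# Probe the port that every "describe nodes" attempt above was refused on
	minikube -p newest-cni-987495 ssh -- curl -sk https://localhost:8443/healthz

If the kubelet journal is clean, running getenforce on the host covers the SELinux half of minikube's suggestion.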
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
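The "<empty>" snapshots above come from the harness reading the host environment; an equivalent hand check, sketched here under the assumption of a POSIX shell on the build host (hypothetical, not harness code):

	# Empty output matches the "<empty>" snapshots above and rules out a proxy
	# silently intercepting the apiserver connection attempts.
	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"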
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-987495
helpers_test.go:244: (dbg) docker inspect newest-cni-987495:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	        "Created": "2025-12-13T09:56:44.68064601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:06:44.630226292Z",
	            "FinishedAt": "2025-12-13T10:06:43.28882954Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hosts",
	        "LogPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac-json.log",
	        "Name": "/newest-cni-987495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-987495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-987495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	                "LowerDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-987495",
	                "Source": "/var/lib/docker/volumes/newest-cni-987495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-987495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-987495",
	                "name.minikube.sigs.k8s.io": "newest-cni-987495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5075c185fe763e8b4bf25c5fa6e0906d897dd0a6aa9fa09a4f6785fde91f40b",
	            "SandboxKey": "/var/run/docker/netns/d5075c185fe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-987495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:1b:64:66:e5:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1cc05b29a6a537694a06e8a33e1431f6867104db51c8eb4299d9f9f07c01c4",
	                    "EndpointID": "e82ad5225efe9fbd3a246c4b71f89967b2a2d9edc684052e26b72ce55599a589",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-987495",
	                        "5d45a23b08cd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
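Note how HostConfig.PortBindings above requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports at container start, and the assigned values (33103-33107) appear only under NetworkSettings.Ports. A hedged Go sketch of reading the SSH mapping out of this JSON, in the same spirit as the docker container inspect template minikube itself runs later in this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // portBinding mirrors the entries under NetworkSettings.Ports above.
    type portBinding struct {
        HostIP   string `json:"HostIp"`
        HostPort string `json:"HostPort"`
    }

    type container struct {
        NetworkSettings struct {
            Ports map[string][]portBinding `json:"Ports"`
        } `json:"NetworkSettings"`
    }

    func main() {
        // Container name taken from this report.
        out, err := exec.Command("docker", "inspect", "newest-cni-987495").Output()
        if err != nil {
            log.Fatal(err)
        }
        var results []container // docker inspect always returns a JSON array
        if err := json.Unmarshal(out, &results); err != nil {
            log.Fatal(err)
        }
        // 22/tcp maps to 127.0.0.1:33103 in the output above.
        for _, b := range results[0].NetworkSettings.Ports["22/tcp"] {
            fmt.Printf("ssh reachable at %s:%s\n", b.HostIP, b.HostPort)
        }
    }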
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (387.202043ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25: (1.567004796s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:52 UTC │ 13 Dec 25 09:53 UTC │
	│ image   │ embed-certs-238987 image list --format=json                                                                                                                                                                                                                │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ pause   │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:05 UTC │                     │
	│ stop    │ -p newest-cni-987495 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-987495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:06:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
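(As a worked example of that format: in the first entry below, "I1213 10:06:44.358606  285837 out.go:360]", the leading I marks an Info-level line, 1213 is the date (December 13), 10:06:44.358606 is the wall-clock time, 285837 is the logging process id (it matches the minikube pid throughout this run), and out.go:360 is the source file and line.)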
	I1213 10:06:44.358606  285837 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:06:44.358774  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.358804  285837 out.go:374] Setting ErrFile to fd 2...
	I1213 10:06:44.358810  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.359110  285837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:06:44.359584  285837 out.go:368] Setting JSON to false
	I1213 10:06:44.360505  285837 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6557,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:06:44.360574  285837 start.go:143] virtualization:  
	I1213 10:06:44.365480  285837 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:06:44.368718  285837 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:06:44.368777  285837 notify.go:221] Checking for updates...
	I1213 10:06:44.374649  285837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:06:44.377632  285837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:44.380625  285837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:06:44.383607  285837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:06:44.386498  285837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:06:44.389949  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:44.390563  285837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:06:44.426169  285837 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:06:44.426412  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.479541  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.469338758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.479654  285837 docker.go:319] overlay module found
	I1213 10:06:44.482815  285837 out.go:179] * Using the docker driver based on existing profile
	I1213 10:06:44.485692  285837 start.go:309] selected driver: docker
	I1213 10:06:44.485711  285837 start.go:927] validating driver "docker" against &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.485823  285837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:06:44.486552  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.545256  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.535101087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.545615  285837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:06:44.545650  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:44.545706  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:44.545747  285837 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.548958  285837 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 10:06:44.551733  285837 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:06:44.554789  285837 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:06:44.557547  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:44.557592  285837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:06:44.557602  285837 cache.go:65] Caching tarball of preloaded images
	I1213 10:06:44.557636  285837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:06:44.557693  285837 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:06:44.557703  285837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:06:44.557824  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.577619  285837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:06:44.577644  285837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:06:44.577660  285837 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:06:44.577696  285837 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:06:44.577756  285837 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "newest-cni-987495"
	I1213 10:06:44.577778  285837 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:06:44.577787  285837 fix.go:54] fixHost starting: 
	I1213 10:06:44.578057  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.595484  285837 fix.go:112] recreateIfNeeded on newest-cni-987495: state=Stopped err=<nil>
	W1213 10:06:44.595545  285837 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 10:06:43.023116  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:45.025351  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:44.598729  285837 out.go:252] * Restarting existing docker container for "newest-cni-987495" ...
	I1213 10:06:44.598811  285837 cli_runner.go:164] Run: docker start newest-cni-987495
	I1213 10:06:44.855461  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.880412  285837 kic.go:430] container "newest-cni-987495" state is running.
	I1213 10:06:44.880797  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:44.909497  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.909726  285837 machine.go:94] provisionDockerMachine start ...
	I1213 10:06:44.909783  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:44.930622  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:44.931232  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:44.931291  285837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:06:44.932041  285837 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:06:48.091507  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
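The first dial at 10:06:44.932 fails with "ssh: handshake failed: EOF" because sshd inside the just-restarted container is not yet accepting connections; libmachine keeps retrying until the hostname command succeeds about three seconds later. A minimal sketch of that dial-with-retry pattern, assuming golang.org/x/crypto/ssh and reusing the port and key path from this log:

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user, and forwarded port are taken from the log above.
        pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node only
            Timeout:         5 * time.Second,
        }
        // Retry: sshd in the freshly started container may refuse or reset
        // the first few handshakes, exactly as seen in the log.
        var client *ssh.Client
        for attempt := 0; attempt < 10; attempt++ {
            if client, err = ssh.Dial("tcp", "127.0.0.1:33103", cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        log.Println("connected")
    }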
	
	I1213 10:06:48.091560  285837 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 10:06:48.091625  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.110757  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.111074  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.111090  285837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 10:06:48.273955  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.274083  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.291615  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.291933  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.291961  285837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:06:48.443806  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:06:48.443836  285837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:06:48.443909  285837 ubuntu.go:190] setting up certificates
	I1213 10:06:48.443925  285837 provision.go:84] configureAuth start
	I1213 10:06:48.444014  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:48.461447  285837 provision.go:143] copyHostCerts
	I1213 10:06:48.461529  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:06:48.461544  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:06:48.461626  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:06:48.461731  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:06:48.461744  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:06:48.461773  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:06:48.461831  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:06:48.461840  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:06:48.461873  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:06:48.461929  285837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 10:06:48.588588  285837 provision.go:177] copyRemoteCerts
	I1213 10:06:48.588677  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:06:48.588742  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.606370  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.711093  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:06:48.728291  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:06:48.746238  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:06:48.763841  285837 provision.go:87] duration metric: took 319.890818ms to configureAuth
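configureAuth regenerates a server certificate whose subject alternative names cover 127.0.0.1, 192.168.85.2, localhost, minikube, and the node name, signed by the CA under .minikube/certs. A self-contained sketch of issuing such a SAN certificate with crypto/x509 (the CA here is generated in-process for illustration; minikube instead loads ca.pem and ca-key.pem from disk):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA. The 26280h lifetime matches CertExpiration in the
        // cluster config earlier in this log.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate with the SANs listed in the provision.go line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "newest-cni-987495"},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-987495"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }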
	I1213 10:06:48.763919  285837 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:06:48.764158  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:48.764172  285837 machine.go:97] duration metric: took 3.854438499s to provisionDockerMachine
	I1213 10:06:48.764181  285837 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 10:06:48.764199  285837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:06:48.764250  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:06:48.764297  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.781656  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.887571  285837 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:06:48.891032  285837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:06:48.891062  285837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:06:48.891074  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:06:48.891128  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:06:48.891231  285837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:06:48.891336  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:06:48.898692  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:48.916401  285837 start.go:296] duration metric: took 152.205033ms for postStartSetup
	I1213 10:06:48.916505  285837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:06:48.916556  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.933960  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.036570  285837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:06:49.041484  285837 fix.go:56] duration metric: took 4.463690867s for fixHost
	I1213 10:06:49.041511  285837 start.go:83] releasing machines lock for "newest-cni-987495", held for 4.463742733s
	I1213 10:06:49.041581  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:49.058404  285837 ssh_runner.go:195] Run: cat /version.json
	I1213 10:06:49.058462  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.058542  285837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:06:49.058607  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.080342  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.081196  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.272327  285837 ssh_runner.go:195] Run: systemctl --version
	I1213 10:06:49.280206  285837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:06:49.285584  285837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:06:49.285649  285837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:06:49.294944  285837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:06:49.295018  285837 start.go:496] detecting cgroup driver to use...
	I1213 10:06:49.295073  285837 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:06:49.295155  285837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:06:49.313555  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:06:49.330142  285837 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:06:49.330250  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:06:49.347394  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:06:49.361017  285837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:06:49.470304  285837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:06:49.578011  285837 docker.go:234] disabling docker service ...
	I1213 10:06:49.578102  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:06:49.592856  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:06:49.605575  285837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:06:49.713643  285837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:06:49.824293  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:06:49.838298  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:06:49.852989  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:06:49.861909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:06:49.870661  285837 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:06:49.870784  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:06:49.879670  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.888429  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:06:49.896909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.905618  285837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:06:49.913163  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:06:49.921632  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:06:49.930294  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:06:49.939291  285837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:06:49.947067  285837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:06:49.954313  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.072981  285837 ssh_runner.go:195] Run: sudo systemctl restart containerd
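The sed pipeline above pins the pause image, forces SystemdCgroup = false (the host reports the cgroupfs driver), normalizes the runc runtime name, and points conf_dir at /etc/cni/net.d, after which containerd is restarted. A minimal Go equivalent of just the SystemdCgroup edit (the real code runs sed over SSH; the path and value come from this log):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Same substitution as the sed command in the log:
        //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
        // containerd must then be restarted (systemctl daemon-reload &&
        // systemctl restart containerd) for the change to take effect.
    }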
	I1213 10:06:50.196904  285837 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:06:50.196994  285837 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:06:50.200903  285837 start.go:564] Will wait 60s for crictl version
	I1213 10:06:50.201048  285837 ssh_runner.go:195] Run: which crictl
	I1213 10:06:50.204672  285837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:06:50.230484  285837 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:06:50.230603  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.250716  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.275578  285837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:06:50.278424  285837 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:06:50.294657  285837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:06:50.298351  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:06:50.310828  285837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:06:50.313572  285837 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:06:50.313727  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:50.313810  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.342567  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.342593  285837 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:06:50.342654  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.371166  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.371189  285837 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:06:50.371197  285837 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:06:50.371299  285837 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
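The [Service] block in the unit above first clears the packaged command with an empty ExecStart= line and then sets the per-node one, the standard systemd drop-in pattern for overriding a unit's ExecStart. A sketch of how such a drop-in can be rendered from the node config with text/template (an illustrative template, not minikube's actual one):

    package main

    import (
    	"os"
    	"text/template"
    )

    // dropIn mirrors the shape of the kubelet drop-in shown in the log:
    // clear the inherited ExecStart, then set the node-specific command.
    const dropIn = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Runtime": "containerd",
    		"Version": "v1.35.0-beta.0",
    		"Node":    "newest-cni-987495",
    		"IP":      "192.168.85.2",
    	})
    }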
	I1213 10:06:50.371378  285837 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:06:50.396100  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:50.396123  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:50.396165  285837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:06:50.396196  285837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:06:50.396373  285837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
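The rendered kubeadm config above is a single multi-document YAML file (written below as /var/tmp/minikube/kubeadm.yaml.new, 2235 bytes) containing four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small stdlib-only helper, hypothetical and for illustration only, that lists the kinds in such a file:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // kinds returns the kind: of every document in a multi-document YAML string.
    func kinds(y string) []string {
    	var out []string
    	sc := bufio.NewScanner(strings.NewReader(y))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "kind:") {
    			out = append(out, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    		}
    	}
    	return out
    }

    func main() {
    	yaml := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
    	fmt.Println(kinds(yaml)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
    }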
	I1213 10:06:50.396459  285837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:06:50.404329  285837 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:06:50.404398  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:06:50.411842  285837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:06:50.424649  285837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:06:50.442140  285837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 10:06:50.455154  285837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:06:50.459006  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:06:50.468675  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.580293  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:50.596864  285837 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 10:06:50.596887  285837 certs.go:195] generating shared ca certs ...
	I1213 10:06:50.596905  285837 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:50.597091  285837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:06:50.597205  285837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:06:50.597223  285837 certs.go:257] generating profile certs ...
	I1213 10:06:50.597356  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 10:06:50.597436  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 10:06:50.597506  285837 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 10:06:50.597658  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:06:50.597722  285837 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:06:50.597739  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:06:50.597785  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:06:50.597830  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:06:50.597864  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:06:50.597929  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:50.598639  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:06:50.618438  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:06:50.636641  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:06:50.654754  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:06:50.674470  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:06:50.692387  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:06:50.709515  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:06:50.726691  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:06:50.744316  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:06:50.762153  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:06:50.779459  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:06:50.799850  285837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:06:50.814739  285837 ssh_runner.go:195] Run: openssl version
	I1213 10:06:50.821667  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.831484  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:06:50.840240  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844034  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844100  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.885521  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:06:50.892992  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.900259  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:06:50.907747  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911335  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911425  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.952315  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:06:50.959952  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.967099  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:06:50.974300  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977776  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977836  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:06:51.019185  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
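Each cert install above follows the OpenSSL hashed-directory convention: place the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash -noout, and symlink it as <hash>.0 in /etc/ssl/certs so OpenSSL can look it up by hash; the 3ec20f2e.0, b5213941.0, and 51391683.0 probes above test exactly those hash links. A hedged Go sketch of the same steps (assumes the openssl binary is on PATH; not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // trustCert computes the OpenSSL subject hash of pem and links it as
    // <hash>.0 under /etc/ssl/certs, emulating the ln -fs in the log.
    func trustCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // -f: replace an existing link
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }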
	I1213 10:06:51.026990  285837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:06:51.031010  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:06:51.084662  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:06:51.132673  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:06:51.177864  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:06:51.221006  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:06:51.268266  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
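The six openssl x509 -checkend 86400 runs above each exit zero only if the certificate is still valid 24 hours from now; that is how the restart path decides the existing control-plane certs can be reused instead of regenerated. A pure-Go equivalent of that probe, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d,
    // i.e. the opposite of "openssl x509 -checkend" succeeding.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }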
	I1213 10:06:51.309760  285837 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:51.309854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:06:51.309920  285837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:06:51.336480  285837 cri.go:89] found id: ""
	I1213 10:06:51.336643  285837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:06:51.344873  285837 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:06:51.344892  285837 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:06:51.344971  285837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:06:51.352443  285837 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:06:51.353090  285837 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.353376  285837 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-987495" cluster setting kubeconfig missing "newest-cni-987495" context setting]
	I1213 10:06:51.353816  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.355217  285837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:06:51.362937  285837 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 10:06:51.363006  285837 kubeadm.go:602] duration metric: took 18.107502ms to restartPrimaryControlPlane
	I1213 10:06:51.363022  285837 kubeadm.go:403] duration metric: took 53.271819ms to StartCluster
	I1213 10:06:51.363041  285837 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.363105  285837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.363987  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.364220  285837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:06:51.364499  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:51.364635  285837 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:06:51.364717  285837 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-987495"
	I1213 10:06:51.364742  285837 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-987495"
	I1213 10:06:51.364767  285837 addons.go:70] Setting default-storageclass=true in profile "newest-cni-987495"
	I1213 10:06:51.364819  285837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-987495"
	I1213 10:06:51.364774  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.365187  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.365396  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.364741  285837 addons.go:70] Setting dashboard=true in profile "newest-cni-987495"
	I1213 10:06:51.365978  285837 addons.go:239] Setting addon dashboard=true in "newest-cni-987495"
	W1213 10:06:51.365987  285837 addons.go:248] addon dashboard should already be in state true
	I1213 10:06:51.366008  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.366429  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.370287  285837 out.go:179] * Verifying Kubernetes components...
	I1213 10:06:51.373474  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:51.400526  285837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:06:51.404501  285837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:06:51.407418  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:06:51.407443  285837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:06:51.407622  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.417800  285837 addons.go:239] Setting addon default-storageclass=true in "newest-cni-987495"
	I1213 10:06:51.417844  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.418251  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.419100  285837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 10:06:47.522700  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:49.522769  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:51.523631  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:51.423855  285837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.423880  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:06:51.423942  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.466299  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.483641  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.486041  285837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.486059  285837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:06:51.486115  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.509387  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.646942  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:51.680839  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:06:51.680862  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:06:51.697914  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:06:51.697938  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:06:51.704518  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.713551  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.723021  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:06:51.723048  285837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:06:51.778125  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:06:51.778149  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:06:51.806697  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:06:51.806719  285837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:06:51.819170  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:06:51.819253  285837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:06:51.832331  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:06:51.832355  285837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:06:51.845336  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:06:51.845362  285837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:06:51.859132  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:51.859155  285837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:06:51.872954  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:52.275964  285837 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:06:52.276037  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:52.276137  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276165  285837 retry.go:31] will retry after 226.70351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276226  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276237  285837 retry.go:31] will retry after 265.695109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276427  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276440  285837 retry.go:31] will retry after 287.765057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
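The apply failures above are expected during a restart: the apiserver on localhost:8443 is not up yet, so kubectl cannot download the OpenAPI schema for validation, and retry.go reschedules each apply with a short, jittered, growing delay (226ms, 265ms, 287ms, ... in this run). An illustrative retry loop in the same spirit (a sketch under those assumptions, not minikube's actual backoff policy):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs f until it succeeds or deadline elapses, sleeping a
    // randomized, growing interval between attempts.
    func retry(deadline time.Duration, f func() error) error {
    	base := 200 * time.Millisecond
    	start := time.Now()
    	for {
    		err := f()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("giving up: %w", err)
    		}
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		base = base * 3 / 2 // widen the jitter window each round
    	}
    }

    func main() {
    	attempts := 0
    	_ = retry(5*time.Second, func() error {
    		attempts++
    		if attempts < 4 {
    			return fmt.Errorf("connection refused")
    		}
    		return nil
    	})
    }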
	I1213 10:06:52.503091  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:52.542820  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:52.565377  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:52.583674  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.583713  285837 retry.go:31] will retry after 384.757306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.624746  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.624777  285837 retry.go:31] will retry after 404.862658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.656044  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.656099  285837 retry.go:31] will retry after 520.967054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.776249  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:52.969189  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.030822  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.051878  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.051909  285837 retry.go:31] will retry after 644.635232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:53.146104  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.146138  285837 retry.go:31] will retry after 713.617137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.177278  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.244074  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.244105  285837 retry.go:31] will retry after 478.208285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:53.276451  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:53.697474  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.722935  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.763188  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.763282  285837 retry.go:31] will retry after 791.669242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:53.776509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
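	The recurring "sudo pgrep -xnf kube-apiserver.*minikube.*" runs are minikube polling for the apiserver process to come back, roughly every 500ms. A self-contained sketch of that probe; the attempt count and interval are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// pgrep exits 0 once a process matches: -x exact match, -n newest,
		// -f match against the full command line.
		for attempt := 0; attempt < 10; attempt++ {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kube-apiserver never appeared")
	}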
	W1213 10:06:53.833584  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:53.833619  285837 retry.go:31] will retry after 1.106769375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:53.860665  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.922352  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.922382  285837 retry.go:31] will retry after 439.211444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:54.277094  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:54.023458  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:56.023636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
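	Interleaved with the addon retries, a second test process (279351) is polling the no-preload node's Ready condition against 192.168.76.2:8443 and hitting the same refused connections. A sketch of that readiness check using client-go; the kubeconfig path and node name come from the log, while the polling cadence and error handling are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the kubectl invocations in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-328069", metav1.GetOptions{})
			if err != nil {
				// While the apiserver is down this is the same
				// "connect: connection refused" logged above.
				fmt.Println("error getting node (will retry):", err)
				time.Sleep(2 * time.Second)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
	}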
	I1213 10:06:54.362407  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:54.425741  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:54.425772  285837 retry.go:31] will retry after 994.413015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:54.555979  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:54.643378  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:54.643410  285837 retry.go:31] will retry after 1.597794919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:54.776687  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.941378  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:55.010057  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:55.010106  285837 retry.go:31] will retry after 1.576792043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:55.276187  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:55.420648  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:55.480113  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:55.480142  285837 retry.go:31] will retry after 2.26666641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:55.776309  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:56.242125  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:56.276562  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:56.308877  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:56.308912  285837 retry.go:31] will retry after 2.70852063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:56.587192  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:56.650840  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:56.650869  285837 retry.go:31] will retry after 1.746680045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:56.776898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.276239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.747110  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:57.776721  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:57.808824  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:57.808896  285837 retry.go:31] will retry after 3.338979851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout empty; stderr: the same storage-provisioner validation error as above)
	I1213 10:06:58.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:58.397695  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:58.460604  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:58.460637  285837 retry.go:31] will retry after 1.622921048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:06:58.776104  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.018609  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:59.122924  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:59.122951  285837 retry.go:31] will retry after 3.647698418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout empty; stderr: the same storageclass validation error as above)
	I1213 10:06:59.276167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:58.523051  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:01.022919  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:59.776456  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:00.084206  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:00.276658  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:00.330895  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:07:00.330933  285837 retry.go:31] will retry after 4.848981129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	(stdout empty; stderr: the same ten dashboard validation errors as above)
	I1213 10:07:00.776778  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.148539  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:01.211860  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.211894  285837 retry.go:31] will retry after 4.161832977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.277039  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.776560  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.276839  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.771686  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:07:02.776972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:02.901393  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:02.901424  285837 retry.go:31] will retry after 5.549971544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:03.276936  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:03.776830  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.276724  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:03.522677  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:05.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
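The node_ready.go:55 warnings interleaved here come from a second test profile (no-preload-328069, pid 279351) polling its node's "Ready" condition roughly every two seconds while that cluster's apiserver is also refusing connections. A rough sketch of such a poll, using a plain HTTPS GET against the endpoint shown in the log; the simplified JSON decoding and the InsecureSkipVerify transport stand in for the real client certificates.

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    type nodeStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	url := "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069"
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("error getting node condition \"Ready\" status (will retry): %v\n", err)
    			time.Sleep(2 * time.Second) // matches the ~2s cadence above
    			continue
    		}
    		var n nodeStatus
    		decodeErr := json.NewDecoder(resp.Body).Decode(&n)
    		resp.Body.Close()
    		if decodeErr == nil {
    			for _, c := range n.Status.Conditions {
    				if c.Type == "Ready" && c.Status == "True" {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }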
	I1213 10:07:04.777224  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.180067  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:07:05.247404  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.247439  285837 retry.go:31] will retry after 4.476695877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.276547  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.374229  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:05.433759  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.433787  285837 retry.go:31] will retry after 4.37892264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.776166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.276368  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.776601  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.276152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.777077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.277179  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.451866  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:08.512981  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.513027  285837 retry.go:31] will retry after 9.372893328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.776155  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.276770  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:08.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:10.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:09.724392  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:09.776822  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:09.785453  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.785488  285837 retry.go:31] will retry after 5.955337388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.813514  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:09.876563  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.876594  285837 retry.go:31] will retry after 6.585328869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:10.276122  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:10.776152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.276997  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.776748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.276867  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.777071  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.276725  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.776915  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.276832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
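Each ssh_runner.go:195 line records a command minikube executes inside the node over SSH. A minimal sketch of that runner pattern using golang.org/x/crypto/ssh; the address, user, and key path below are placeholders, not values from this run.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH executes one command on a remote host and returns its
    // combined stdout/stderr, mirroring the Run: lines in this log.
    func runOverSSH(addr, user, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(command)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:32772", "docker", "/path/to/id_rsa",
    		"sudo pgrep -xnf kube-apiserver.*minikube.*")
    	fmt.Println(out, err)
    }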
	W1213 10:07:12.022989  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:14.522670  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:14.777034  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.277144  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.741108  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:15.776723  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:15.809076  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:15.809111  285837 retry.go:31] will retry after 8.411412429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.276706  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:16.462334  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:16.524133  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.524164  285837 retry.go:31] will retry after 16.275248342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.776613  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.276278  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.776240  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.886523  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:17.954531  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:17.954562  285837 retry.go:31] will retry after 10.907278655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:18.276175  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:18.776243  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.276722  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:17.022862  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:19.522763  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:21.522806  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:19.776239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.276570  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.776244  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.277087  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.776477  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.777167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.276540  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.776720  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
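The half-second cadence of the pgrep runs above is a process-level liveness wait: keep checking for a kube-apiserver whose command line matches the profile until one appears. A local sketch of that loop (the log runs the identical command remotely over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until it exits 0 (a match found)
    // or the deadline passes. With -f the pattern is matched against the
    // full command line, -x requires an exact match, -n picks the newest PID.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
    		if out, err := cmd.Output(); err == nil {
    			fmt.Printf("kube-apiserver running, pid %s", out)
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // matches the log's cadence
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }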
	I1213 10:07:24.220799  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:24.276447  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:24.283800  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.283834  285837 retry.go:31] will retry after 19.949258949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:24.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:26.023564  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:24.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.276211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.776711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.276227  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.776716  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.276229  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.776183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.276941  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.776226  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.862833  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:28.922616  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:28.922648  285837 retry.go:31] will retry after 8.454738907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:29.277083  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:28.522731  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:30.522938  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:29.776182  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.277060  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.776835  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.276746  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.776414  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.276209  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.776715  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.799816  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:32.901801  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:32.901845  285837 retry.go:31] will retry after 14.65260505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:33.276216  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:33.776222  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.276756  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:33.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:35.522770  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:34.776764  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.277073  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.776211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.276331  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.776510  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.378406  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:37.440661  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.440691  285837 retry.go:31] will retry after 16.048870296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.776113  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.276917  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.276296  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:38.022809  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:40.522836  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:39.776735  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.276749  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.777116  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.277172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.776857  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.277141  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.776207  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.776690  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:44.233363  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:44.276911  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:44.294603  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.294641  285837 retry.go:31] will retry after 45.098120748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
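
The spread of retry delays logged above (14.65s, 16.05s, 45.10s) suggests randomized backoff in retry.go; the exact policy is not visible in this log. A minimal bash sketch of the same shape, reusing the apply command verbatim from the log:

	# Re-run the apply with jittered delays until it succeeds or a deadline
	# passes. Only the shape is meant to match the log; the real backoff
	# policy lives in minikube's retry.go.
	deadline=$((SECONDS + 600))
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "giving up"; break; }
	  sleep $((10 + RANDOM % 40))  # 10-49s, roughly the range seen in the log
	done
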
	W1213 10:07:42.523034  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:45.022823  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:44.776742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.276466  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.776133  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.280870  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.776232  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.276987  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.554729  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:47.616803  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.616837  285837 retry.go:31] will retry after 38.754607023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.776168  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.276203  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.776412  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.276189  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:47.022949  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:49.522878  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:49.776177  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.277157  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.776201  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.276146  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.776144  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:51.776242  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:51.804204  285837 cri.go:89] found id: ""
	I1213 10:07:51.804236  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.804246  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:51.804253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:51.804314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:51.829636  285837 cri.go:89] found id: ""
	I1213 10:07:51.829669  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.829679  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:51.829685  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:51.829745  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:51.857487  285837 cri.go:89] found id: ""
	I1213 10:07:51.857510  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.857519  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:51.857525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:51.857590  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:51.881972  285837 cri.go:89] found id: ""
	I1213 10:07:51.881998  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.882006  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:51.882012  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:51.882072  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:51.906050  285837 cri.go:89] found id: ""
	I1213 10:07:51.906074  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.906083  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:51.906089  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:51.906149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:51.930678  285837 cri.go:89] found id: ""
	I1213 10:07:51.930700  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.930708  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:51.930715  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:51.930774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:51.955590  285837 cri.go:89] found id: ""
	I1213 10:07:51.955661  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.955683  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:51.955701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:51.955786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:51.979349  285837 cri.go:89] found id: ""
	I1213 10:07:51.979374  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.979382  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:51.979391  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:51.979405  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:52.048255  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:52.048276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:52.048290  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:52.074149  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:52.074187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:52.103113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:52.103142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:52.161764  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:52.161797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
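
Each failed cycle above ends with the same diagnostic sweep: poll for a kube-apiserver process, list CRI containers for every control-plane component, then gather kubelet, dmesg, containerd, and container-status logs. A condensed manual equivalent for debugging on the node (a sketch; component names and flags are taken from the log):

	# One pass over the probes minikube loops through above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	    kube-controller-manager kindnet kubernetes-dashboard; do
	  [ -n "$(sudo crictl ps -a --quiet --name="$c")" ] || echo "no $c container"
	done
	sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
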
	I1213 10:07:53.489865  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:53.547700  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:53.547730  285837 retry.go:31] will retry after 48.398435893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:52.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:54.023780  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:56.522671  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:54.676402  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:54.686866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:54.686943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:54.716493  285837 cri.go:89] found id: ""
	I1213 10:07:54.716514  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.716523  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:54.716529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:54.716584  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:54.740751  285837 cri.go:89] found id: ""
	I1213 10:07:54.740778  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.740787  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:54.740797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:54.740854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:54.763680  285837 cri.go:89] found id: ""
	I1213 10:07:54.763703  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.763712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:54.763717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:54.763773  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:54.787504  285837 cri.go:89] found id: ""
	I1213 10:07:54.787556  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.787564  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:54.787570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:54.787626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:54.812200  285837 cri.go:89] found id: ""
	I1213 10:07:54.812222  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.812231  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:54.812253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:54.812314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:54.841586  285837 cri.go:89] found id: ""
	I1213 10:07:54.841613  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.841623  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:54.841629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:54.841687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:54.865631  285837 cri.go:89] found id: ""
	I1213 10:07:54.865658  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.865667  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:54.865673  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:54.865731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:54.889746  285837 cri.go:89] found id: ""
	I1213 10:07:54.889773  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.889782  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:54.889792  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:54.889803  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:54.945120  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:54.945155  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:54.958121  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:54.958145  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:55.027564  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:55.027592  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:55.027605  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:55.053752  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:55.053788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:57.584821  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:57.597676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:57.597774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:57.621661  285837 cri.go:89] found id: ""
	I1213 10:07:57.621684  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.621692  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:57.621699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:57.621756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:57.649006  285837 cri.go:89] found id: ""
	I1213 10:07:57.649028  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.649036  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:57.649042  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:57.649107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:57.672839  285837 cri.go:89] found id: ""
	I1213 10:07:57.672866  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.672875  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:57.672881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:57.672937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:57.697343  285837 cri.go:89] found id: ""
	I1213 10:07:57.697366  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.697375  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:57.697381  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:57.697447  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:57.722254  285837 cri.go:89] found id: ""
	I1213 10:07:57.722276  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.722284  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:57.722291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:57.722346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:57.746125  285837 cri.go:89] found id: ""
	I1213 10:07:57.746150  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.746159  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:57.746165  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:57.746220  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:57.770612  285837 cri.go:89] found id: ""
	I1213 10:07:57.770679  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.770702  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:57.770720  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:57.770799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:57.795253  285837 cri.go:89] found id: ""
	I1213 10:07:57.795277  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.795285  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:57.795294  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:57.795320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:57.852923  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:57.852957  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:57.866320  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:57.866350  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:57.930573  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:57.930596  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:57.930609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:57.955644  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:57.955687  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:07:58.522782  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:00.523382  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:00.485873  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:00.498933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:00.499039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:00.588348  285837 cri.go:89] found id: ""
	I1213 10:08:00.588373  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.588383  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:00.588403  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:00.588480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:00.632508  285837 cri.go:89] found id: ""
	I1213 10:08:00.632581  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.632604  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:00.632623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:00.632721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:00.659204  285837 cri.go:89] found id: ""
	I1213 10:08:00.659231  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.659240  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:00.659246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:00.659303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:00.685440  285837 cri.go:89] found id: ""
	I1213 10:08:00.685468  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.685477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:00.685492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:00.685551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:00.710692  285837 cri.go:89] found id: ""
	I1213 10:08:00.710719  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.710728  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:00.710734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:00.710791  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:00.736661  285837 cri.go:89] found id: ""
	I1213 10:08:00.736683  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.736692  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:00.736698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:00.736766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:00.761591  285837 cri.go:89] found id: ""
	I1213 10:08:00.761617  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.761627  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:00.761634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:00.761695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:00.786438  285837 cri.go:89] found id: ""
	I1213 10:08:00.786465  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.786474  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:00.786484  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:00.786494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:00.842291  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:00.842327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:00.855993  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:00.856020  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:00.925840  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:00.925874  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:00.925888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:00.953015  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:00.953064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.486172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.496591  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:03.496662  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:03.534940  285837 cri.go:89] found id: ""
	I1213 10:08:03.534964  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.534973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:03.534979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:03.535038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:03.598662  285837 cri.go:89] found id: ""
	I1213 10:08:03.598688  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.598698  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:03.598704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:03.598766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:03.624092  285837 cri.go:89] found id: ""
	I1213 10:08:03.624114  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.624122  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:03.624129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:03.624188  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:03.649153  285837 cri.go:89] found id: ""
	I1213 10:08:03.649176  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.649185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:03.649196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:03.649255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:03.673710  285837 cri.go:89] found id: ""
	I1213 10:08:03.673778  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.673802  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:03.673822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:03.673901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:03.698952  285837 cri.go:89] found id: ""
	I1213 10:08:03.698978  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.699004  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:03.699011  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:03.699076  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:03.723499  285837 cri.go:89] found id: ""
	I1213 10:08:03.723548  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.723558  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:03.723563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:03.723626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:03.748795  285837 cri.go:89] found id: ""
	I1213 10:08:03.748819  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.748828  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:03.748837  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:03.748848  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:03.812342  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:03.812368  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:03.812388  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:03.841166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:03.841206  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.871116  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:03.871146  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:03.927807  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:03.927839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:08:03.022774  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:05.522704  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:06.441780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.452228  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:06.452309  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:06.476347  285837 cri.go:89] found id: ""
	I1213 10:08:06.476370  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.476378  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:06.476384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:06.476441  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:06.504937  285837 cri.go:89] found id: ""
	I1213 10:08:06.504961  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.504970  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:06.504977  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:06.505037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:06.553519  285837 cri.go:89] found id: ""
	I1213 10:08:06.553545  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.553553  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:06.553559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:06.553619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:06.608223  285837 cri.go:89] found id: ""
	I1213 10:08:06.608249  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.608258  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:06.608264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:06.608322  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:06.639732  285837 cri.go:89] found id: ""
	I1213 10:08:06.639801  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.639816  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:06.639823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:06.639886  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:06.668074  285837 cri.go:89] found id: ""
	I1213 10:08:06.668099  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.668108  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:06.668114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:06.668190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:06.691695  285837 cri.go:89] found id: ""
	I1213 10:08:06.691720  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.691729  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:06.691735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:06.691801  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:06.717093  285837 cri.go:89] found id: ""
	I1213 10:08:06.717120  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.717129  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:06.717140  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:06.717152  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:06.773552  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:06.773584  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.787064  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:06.787090  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:06.854164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:06.854189  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:06.854202  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:06.879668  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:06.879702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
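The block above is one pass of minikube's fixed diagnostic sweep: pgrep for a live kube-apiserver, a crictl query per control-plane component, then journal, dmesg, and describe-nodes gathers. As a hedged repro aid, the same sweep can be run by hand inside the node (for example via `minikube ssh -p <profile>`, where `<profile>` is a placeholder); every command and flag below is copied from the log lines above, nothing is a new interface:

    # One pass of the sweep, assuming a shell inside the minikube node.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # An empty result matches the 'No container was found matching' warnings.
      [ -z "$ids" ] && echo "no container found matching \"$c\""
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400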
	W1213 10:08:08.022653  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:10.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:09.406742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.417411  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:09.417484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:09.442113  285837 cri.go:89] found id: ""
	I1213 10:08:09.442138  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.442147  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:09.442153  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:09.442218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:09.466316  285837 cri.go:89] found id: ""
	I1213 10:08:09.466342  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.466351  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:09.466357  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:09.466415  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:09.491678  285837 cri.go:89] found id: ""
	I1213 10:08:09.491703  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.491712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:09.491718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:09.491776  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:09.515316  285837 cri.go:89] found id: ""
	I1213 10:08:09.515337  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.515346  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:09.515352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:09.515410  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:09.567095  285837 cri.go:89] found id: ""
	I1213 10:08:09.567116  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.567125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:09.567131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:09.567197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:09.616045  285837 cri.go:89] found id: ""
	I1213 10:08:09.616067  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.616076  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:09.616082  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:09.616142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:09.640449  285837 cri.go:89] found id: ""
	I1213 10:08:09.640479  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.640488  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:09.640495  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:09.640555  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:09.664888  285837 cri.go:89] found id: ""
	I1213 10:08:09.664912  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.664921  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:09.664930  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:09.664941  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.691077  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:09.691106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:09.747246  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:09.747280  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:09.761112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:09.761140  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:09.830659  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:09.830682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:09.830695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.356184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.368119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:12.368203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:12.394250  285837 cri.go:89] found id: ""
	I1213 10:08:12.394279  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.394291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:12.394298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:12.394365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:12.419062  285837 cri.go:89] found id: ""
	I1213 10:08:12.419086  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.419095  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:12.419102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:12.419159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:12.446274  285837 cri.go:89] found id: ""
	I1213 10:08:12.446300  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.446308  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:12.446315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:12.446371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:12.469875  285837 cri.go:89] found id: ""
	I1213 10:08:12.469901  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.469910  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:12.469917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:12.469977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:12.495108  285837 cri.go:89] found id: ""
	I1213 10:08:12.495136  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.495145  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:12.495152  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:12.495207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:12.521169  285837 cri.go:89] found id: ""
	I1213 10:08:12.521190  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.521198  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:12.521204  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:12.521258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:12.557387  285837 cri.go:89] found id: ""
	I1213 10:08:12.557412  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.557421  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:12.557427  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:12.557483  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:12.586888  285837 cri.go:89] found id: ""
	I1213 10:08:12.586913  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.586922  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:12.586931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:12.586942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:12.654328  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:12.654361  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:12.668044  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:12.668071  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:12.737226  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:12.737248  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:12.737261  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.762749  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:12.762783  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:12.022956  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:14.522703  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:15.289142  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.301958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:15.302029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:15.330317  285837 cri.go:89] found id: ""
	I1213 10:08:15.330344  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.330353  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:15.330359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:15.330423  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:15.358090  285837 cri.go:89] found id: ""
	I1213 10:08:15.358115  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.358124  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:15.358130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:15.358187  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:15.382832  285837 cri.go:89] found id: ""
	I1213 10:08:15.382862  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.382871  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:15.382877  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:15.382940  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:15.409515  285837 cri.go:89] found id: ""
	I1213 10:08:15.409539  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.409549  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:15.409555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:15.409613  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:15.433885  285837 cri.go:89] found id: ""
	I1213 10:08:15.433911  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.433920  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:15.433926  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:15.433989  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:15.458618  285837 cri.go:89] found id: ""
	I1213 10:08:15.458643  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.458653  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:15.458659  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:15.458715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:15.482592  285837 cri.go:89] found id: ""
	I1213 10:08:15.482616  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.482625  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:15.482635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:15.482693  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:15.511125  285837 cri.go:89] found id: ""
	I1213 10:08:15.511153  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.511163  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:15.511172  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:15.511183  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:15.584797  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:15.584833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:15.598725  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:15.598752  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:15.681678  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:15.681701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:15.681714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:15.707610  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:15.707646  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:18.235184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.246689  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:18.246762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:18.271129  285837 cri.go:89] found id: ""
	I1213 10:08:18.271155  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.271165  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:18.271172  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:18.271240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:18.296110  285837 cri.go:89] found id: ""
	I1213 10:08:18.296135  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.296144  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:18.296150  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:18.296208  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:18.321267  285837 cri.go:89] found id: ""
	I1213 10:08:18.321290  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.321304  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:18.321311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:18.321368  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:18.349274  285837 cri.go:89] found id: ""
	I1213 10:08:18.349300  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.349309  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:18.349315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:18.349414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:18.373235  285837 cri.go:89] found id: ""
	I1213 10:08:18.373310  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.373325  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:18.373335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:18.373395  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:18.397157  285837 cri.go:89] found id: ""
	I1213 10:08:18.397181  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.397190  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:18.397196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:18.397283  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:18.421144  285837 cri.go:89] found id: ""
	I1213 10:08:18.421168  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.421177  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:18.421184  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:18.421243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:18.449567  285837 cri.go:89] found id: ""
	I1213 10:08:18.449643  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.449659  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:18.449670  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:18.449682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:18.505803  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:18.505836  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:18.520075  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:18.520099  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:18.640681  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:18.640706  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:18.640720  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:18.666166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:18.666201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:17.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:19.522795  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:20.031934  279351 node_ready.go:38] duration metric: took 6m0.009733727s for node "no-preload-328069" to be "Ready" ...
	I1213 10:08:20.035146  279351 out.go:203] 
	W1213 10:08:20.038039  279351 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:08:20.038064  279351 out.go:285] * 
	W1213 10:08:20.040199  279351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:08:20.043110  279351 out.go:203] 
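This is the terminal failure for the no-preload profile: the 6m0s readiness wait expired because every probe of https://192.168.76.2:8443 was refused, and the crictl sweeps above never found a kube-apiserver container to begin with. A minimal sketch of the two manual checks that pin this down (endpoint, binary path, and kubeconfig path are taken from the log; the jsonpath query is standard kubectl usage, not something the test itself ran):

    # Does anything answer on the apiserver endpoint at all?
    curl -k https://192.168.76.2:8443/healthz
    # If it does, read the node's Ready condition directly.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get node no-preload-328069 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'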
	I1213 10:08:21.195745  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.206020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:21.206084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:21.246086  285837 cri.go:89] found id: ""
	I1213 10:08:21.246106  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.246115  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:21.246122  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:21.246181  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:21.273446  285837 cri.go:89] found id: ""
	I1213 10:08:21.273469  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.273477  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:21.273483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:21.273543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:21.312010  285837 cri.go:89] found id: ""
	I1213 10:08:21.312031  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.312040  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:21.312046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:21.312104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:21.357158  285837 cri.go:89] found id: ""
	I1213 10:08:21.357177  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.357185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:21.357192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:21.357248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:21.398112  285837 cri.go:89] found id: ""
	I1213 10:08:21.398135  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.398143  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:21.398149  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:21.398205  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:21.447244  285837 cri.go:89] found id: ""
	I1213 10:08:21.447268  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.447276  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:21.447283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:21.447347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:21.495558  285837 cri.go:89] found id: ""
	I1213 10:08:21.495581  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.495589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:21.495595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:21.495652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:21.555224  285837 cri.go:89] found id: ""
	I1213 10:08:21.555248  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.555257  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:21.555270  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:21.555281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:21.627890  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:21.627922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:21.674689  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:21.674714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:21.747238  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:21.747267  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:21.763785  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:21.763813  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:21.844164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:24.345832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.356414  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:24.356487  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:24.381314  285837 cri.go:89] found id: ""
	I1213 10:08:24.381340  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.381349  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:24.381356  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:24.381418  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:24.405581  285837 cri.go:89] found id: ""
	I1213 10:08:24.405606  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.405614  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:24.405621  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:24.405679  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:24.429873  285837 cri.go:89] found id: ""
	I1213 10:08:24.429895  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.429904  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:24.429911  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:24.429971  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:24.457573  285837 cri.go:89] found id: ""
	I1213 10:08:24.457600  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.457609  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:24.457616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:24.457674  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:24.481838  285837 cri.go:89] found id: ""
	I1213 10:08:24.481865  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.481874  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:24.481880  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:24.481937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:24.507009  285837 cri.go:89] found id: ""
	I1213 10:08:24.507034  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.507043  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:24.507049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:24.507105  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:24.550665  285837 cri.go:89] found id: ""
	I1213 10:08:24.550687  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.550695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:24.550702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:24.550757  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:24.584765  285837 cri.go:89] found id: ""
	I1213 10:08:24.584787  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.584805  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:24.584815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:24.584828  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:24.652249  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:24.652271  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:24.652285  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:24.677128  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:24.677161  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:24.705609  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:24.705635  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:24.761364  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:24.761399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:26.371661  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:08:26.432065  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:26.432188  285837 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
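The validation error above is a symptom rather than the cause: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443. The error text suggests --validate=false; the sketch below (the log's own command line with that flag added) is only to make the suggestion concrete, since skipping validation would still fail on the same refused connection:

    # Same apply as the log, with the suggested flag; still expected to fail
    # while the apiserver is down, because apply must reach localhost:8443.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml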
	I1213 10:08:27.285248  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.295647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:27.295723  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:27.320532  285837 cri.go:89] found id: ""
	I1213 10:08:27.320555  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.320564  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:27.320570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:27.320628  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:27.344722  285837 cri.go:89] found id: ""
	I1213 10:08:27.344748  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.344758  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:27.344764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:27.344852  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:27.370726  285837 cri.go:89] found id: ""
	I1213 10:08:27.370751  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.370760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:27.370766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:27.370849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:27.394557  285837 cri.go:89] found id: ""
	I1213 10:08:27.394583  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.394617  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:27.394628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:27.394703  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:27.418575  285837 cri.go:89] found id: ""
	I1213 10:08:27.418601  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.418610  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:27.418616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:27.418673  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:27.444932  285837 cri.go:89] found id: ""
	I1213 10:08:27.444953  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.444962  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:27.444968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:27.445029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:27.468135  285837 cri.go:89] found id: ""
	I1213 10:08:27.468213  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.468237  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:27.468256  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:27.468330  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:27.493054  285837 cri.go:89] found id: ""
	I1213 10:08:27.493079  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.493089  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:27.493098  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:27.493126  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:27.555066  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:27.555141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:27.572569  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:27.572644  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:27.641611  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:27.641682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:27.641704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:27.667653  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:27.667690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:29.393883  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:08:29.454286  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:29.454393  285837 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:08:30.208961  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.219829  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:30.219950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:30.248442  285837 cri.go:89] found id: ""
	I1213 10:08:30.248471  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.248480  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:30.248486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:30.248569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:30.273935  285837 cri.go:89] found id: ""
	I1213 10:08:30.273964  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.273973  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:30.273979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:30.274067  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:30.299229  285837 cri.go:89] found id: ""
	I1213 10:08:30.299256  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.299265  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:30.299271  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:30.299328  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:30.327770  285837 cri.go:89] found id: ""
	I1213 10:08:30.327792  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.327801  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:30.327807  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:30.327863  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:30.352796  285837 cri.go:89] found id: ""
	I1213 10:08:30.352851  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.352861  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:30.352867  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:30.352928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:30.376505  285837 cri.go:89] found id: ""
	I1213 10:08:30.376530  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.376539  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:30.376546  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:30.376646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:30.400512  285837 cri.go:89] found id: ""
	I1213 10:08:30.400536  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.400545  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:30.400551  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:30.400611  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:30.425139  285837 cri.go:89] found id: ""
	I1213 10:08:30.425162  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.425171  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:30.425181  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:30.425192  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:30.454686  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:30.454713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:30.509531  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:30.509568  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:30.527699  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:30.527727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:30.597883  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:30.597907  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:30.597920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.123638  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.134229  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:33.134302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:33.161169  285837 cri.go:89] found id: ""
	I1213 10:08:33.161201  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.161210  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:33.161218  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:33.161278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:33.189591  285837 cri.go:89] found id: ""
	I1213 10:08:33.189614  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.189623  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:33.189629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:33.189691  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:33.213288  285837 cri.go:89] found id: ""
	I1213 10:08:33.213315  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.213325  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:33.213331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:33.213388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:33.237186  285837 cri.go:89] found id: ""
	I1213 10:08:33.237214  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.237223  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:33.237230  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:33.237291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:33.265589  285837 cri.go:89] found id: ""
	I1213 10:08:33.265615  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.265623  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:33.265629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:33.265687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:33.289791  285837 cri.go:89] found id: ""
	I1213 10:08:33.289862  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.289884  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:33.289902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:33.289986  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:33.314058  285837 cri.go:89] found id: ""
	I1213 10:08:33.314085  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.314094  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:33.314099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:33.314170  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:33.338463  285837 cri.go:89] found id: ""
	I1213 10:08:33.338490  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.338499  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:33.338509  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:33.338521  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:33.393919  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:33.393953  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:33.407152  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:33.407179  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:33.470838  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:33.470862  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:33.470875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.495641  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:33.495672  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.035663  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.047578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:36.047649  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:36.076122  285837 cri.go:89] found id: ""
	I1213 10:08:36.076145  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.076154  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:36.076160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:36.076236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:36.105524  285837 cri.go:89] found id: ""
	I1213 10:08:36.105554  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.105564  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:36.105570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:36.105629  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:36.134491  285837 cri.go:89] found id: ""
	I1213 10:08:36.134565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.134587  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:36.134607  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:36.134695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:36.159376  285837 cri.go:89] found id: ""
	I1213 10:08:36.159449  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.159471  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:36.159489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:36.159608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:36.185490  285837 cri.go:89] found id: ""
	I1213 10:08:36.185565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.185590  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:36.185604  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:36.185676  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:36.219394  285837 cri.go:89] found id: ""
	I1213 10:08:36.219422  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.219431  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:36.219438  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:36.219494  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:36.243333  285837 cri.go:89] found id: ""
	I1213 10:08:36.243357  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.243367  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:36.243373  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:36.243435  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:36.267160  285837 cri.go:89] found id: ""
	I1213 10:08:36.267187  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.267196  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:36.267206  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:36.267218  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:36.280345  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:36.280375  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:36.343250  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:36.343272  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:36.343284  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:36.368575  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:36.368610  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.395546  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:36.395573  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:38.955916  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.966663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:38.966732  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:38.991698  285837 cri.go:89] found id: ""
	I1213 10:08:38.991722  285837 logs.go:282] 0 containers: []
	W1213 10:08:38.991730  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:38.991737  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:38.991795  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:39.029472  285837 cri.go:89] found id: ""
	I1213 10:08:39.029501  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.029510  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:39.029515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:39.029610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:39.058052  285837 cri.go:89] found id: ""
	I1213 10:08:39.058082  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.058097  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:39.058104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:39.058165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:39.086309  285837 cri.go:89] found id: ""
	I1213 10:08:39.086331  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.086339  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:39.086345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:39.086407  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:39.113392  285837 cri.go:89] found id: ""
	I1213 10:08:39.113420  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.113430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:39.113436  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:39.113497  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:39.138083  285837 cri.go:89] found id: ""
	I1213 10:08:39.138109  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.138118  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:39.138125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:39.138182  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:39.162132  285837 cri.go:89] found id: ""
	I1213 10:08:39.162160  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.162170  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:39.162176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:39.162239  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:39.190634  285837 cri.go:89] found id: ""
	I1213 10:08:39.190661  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.190670  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:39.190679  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:39.190691  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:39.215694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:39.215727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:39.246161  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:39.246189  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:39.305962  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:39.305996  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:39.319717  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:39.319744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:39.382189  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:41.883328  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.894154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:41.894228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:41.921476  285837 cri.go:89] found id: ""
	I1213 10:08:41.921500  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.921509  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:41.921515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:41.921573  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:41.945812  285837 cri.go:89] found id: ""
	I1213 10:08:41.945835  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.945843  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:41.945849  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:41.945912  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:41.946276  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:08:41.977805  285837 cri.go:89] found id: ""
	I1213 10:08:41.977840  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.977849  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:41.977855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:41.977923  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 10:08:42.037880  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:42.037998  285837 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:08:42.038333  285837 cri.go:89] found id: ""
	I1213 10:08:42.038351  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.038357  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:42.038364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:42.038439  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:42.041174  285837 out.go:179] * Enabled addons: 
	I1213 10:08:42.044041  285837 addons.go:530] duration metric: took 1m50.679416537s for enable addons: enabled=[]
	I1213 10:08:42.069124  285837 cri.go:89] found id: ""
	I1213 10:08:42.069158  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.069173  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:42.069181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:42.069277  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:42.114076  285837 cri.go:89] found id: ""
	I1213 10:08:42.114106  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.114119  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:42.114129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:42.114201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:42.143501  285837 cri.go:89] found id: ""
	I1213 10:08:42.143577  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.143587  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:42.143594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:42.143665  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:42.174231  285837 cri.go:89] found id: ""
	I1213 10:08:42.174258  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.174267  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:42.174278  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:42.174291  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:42.209465  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:42.209500  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:42.270663  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:42.270702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:42.286732  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:42.286769  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:42.356785  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:42.356809  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:42.356822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:44.882858  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.893320  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:44.893392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:44.918585  285837 cri.go:89] found id: ""
	I1213 10:08:44.918612  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.918621  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:44.918628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:44.918686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:44.943719  285837 cri.go:89] found id: ""
	I1213 10:08:44.943746  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.943755  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:44.943762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:44.943822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:44.968177  285837 cri.go:89] found id: ""
	I1213 10:08:44.968204  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.968213  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:44.968219  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:44.968273  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:45.012025  285837 cri.go:89] found id: ""
	I1213 10:08:45.012052  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.012062  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:45.012069  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:45.012140  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:45.059717  285837 cri.go:89] found id: ""
	I1213 10:08:45.059815  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.059841  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:45.059864  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:45.059985  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:45.146429  285837 cri.go:89] found id: ""
	I1213 10:08:45.146507  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.146534  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:45.146585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:45.146680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:45.192650  285837 cri.go:89] found id: ""
	I1213 10:08:45.192683  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.192695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:45.192704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:45.192786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:45.240936  285837 cri.go:89] found id: ""
	I1213 10:08:45.241266  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.241306  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:45.241344  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:45.241423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:45.280178  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:45.280250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:45.343980  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:45.344023  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:45.357799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:45.357833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:45.421366  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:45.421390  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:45.421403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
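	The cycle above is minikube's control-plane probe: pgrep finds no kube-apiserver process, each crictl query for the expected control-plane containers comes back empty, and logs.go falls back to gathering kubelet, dmesg, containerd, and describe-nodes output. A minimal bash sketch of the same checks, run by hand inside the node (every command is taken verbatim from the log; only the loop structure is added):

	    #!/bin/bash
	    # Re-run the per-component container checks minikube loops through above.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done
	    # Fallback log gathering, mirroring the "Gathering logs for ..." steps:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig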
	I1213 10:08:47.952239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.963745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:47.963816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:47.989230  285837 cri.go:89] found id: ""
	I1213 10:08:47.989253  285837 logs.go:282] 0 containers: []
	W1213 10:08:47.989262  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:47.989288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:47.989360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:48.018062  285837 cri.go:89] found id: ""
	I1213 10:08:48.018087  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.018096  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:48.018102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:48.018165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:48.049042  285837 cri.go:89] found id: ""
	I1213 10:08:48.049068  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.049078  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:48.049084  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:48.049147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:48.077924  285837 cri.go:89] found id: ""
	I1213 10:08:48.077946  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.077955  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:48.077965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:48.078023  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:48.106258  285837 cri.go:89] found id: ""
	I1213 10:08:48.106284  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.106292  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:48.106298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:48.106355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:48.130836  285837 cri.go:89] found id: ""
	I1213 10:08:48.130861  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.130869  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:48.130883  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:48.130945  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:48.157446  285837 cri.go:89] found id: ""
	I1213 10:08:48.157470  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.157479  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:48.157485  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:48.157543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:48.182657  285837 cri.go:89] found id: ""
	I1213 10:08:48.182687  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.182697  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:48.182707  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:48.182719  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:48.196607  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:48.196685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:48.261824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:48.261895  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:48.261914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:48.287393  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:48.287436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:48.318617  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:48.318647  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:50.875656  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.886169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:50.886240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:50.910775  285837 cri.go:89] found id: ""
	I1213 10:08:50.910801  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.910810  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:50.910817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:50.910874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:50.936159  285837 cri.go:89] found id: ""
	I1213 10:08:50.936185  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.936194  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:50.936200  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:50.936262  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:50.960845  285837 cri.go:89] found id: ""
	I1213 10:08:50.960879  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.960888  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:50.960895  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:50.960956  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:50.989232  285837 cri.go:89] found id: ""
	I1213 10:08:50.989262  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.989271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:50.989277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:50.989361  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:51.017908  285837 cri.go:89] found id: ""
	I1213 10:08:51.017936  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.017944  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:51.017950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:51.018012  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:51.062320  285837 cri.go:89] found id: ""
	I1213 10:08:51.062355  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.062363  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:51.062369  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:51.062436  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:51.091004  285837 cri.go:89] found id: ""
	I1213 10:08:51.091038  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.091047  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:51.091053  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:51.091118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:51.116510  285837 cri.go:89] found id: ""
	I1213 10:08:51.116543  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.116552  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:51.116561  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:51.116574  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:51.147665  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:51.147690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:51.203425  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:51.203457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:51.216632  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:51.216657  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:51.278157  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:51.278181  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:51.278195  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:53.804075  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.815823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:53.815894  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:53.841157  285837 cri.go:89] found id: ""
	I1213 10:08:53.841180  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.841189  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:53.841195  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:53.841251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:53.869816  285837 cri.go:89] found id: ""
	I1213 10:08:53.869840  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.869850  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:53.869856  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:53.869916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:53.893754  285837 cri.go:89] found id: ""
	I1213 10:08:53.893781  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.893789  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:53.893796  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:53.893856  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:53.917859  285837 cri.go:89] found id: ""
	I1213 10:08:53.917881  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.917890  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:53.917896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:53.917957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:53.941859  285837 cri.go:89] found id: ""
	I1213 10:08:53.941886  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.941895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:53.941902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:53.941964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:53.969296  285837 cri.go:89] found id: ""
	I1213 10:08:53.969320  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.969329  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:53.969335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:53.969392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:53.993419  285837 cri.go:89] found id: ""
	I1213 10:08:53.993448  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.993458  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:53.993464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:53.993520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:54.026047  285837 cri.go:89] found id: ""
	I1213 10:08:54.026074  285837 logs.go:282] 0 containers: []
	W1213 10:08:54.026084  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:54.026094  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:54.026106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:54.042132  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:54.042160  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:54.121343  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:54.121416  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:54.121439  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:54.146468  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:54.146502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:54.173087  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:54.173114  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:56.730884  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.741016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:56.741083  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:56.765437  285837 cri.go:89] found id: ""
	I1213 10:08:56.765461  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.765470  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:56.765476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:56.765535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:56.804701  285837 cri.go:89] found id: ""
	I1213 10:08:56.804725  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.804734  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:56.804740  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:56.804796  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:56.831548  285837 cri.go:89] found id: ""
	I1213 10:08:56.831573  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.831582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:56.831588  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:56.831646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:56.860131  285837 cri.go:89] found id: ""
	I1213 10:08:56.860154  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.860162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:56.860169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:56.860223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:56.884508  285837 cri.go:89] found id: ""
	I1213 10:08:56.884532  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.884540  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:56.884547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:56.884602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:56.909197  285837 cri.go:89] found id: ""
	I1213 10:08:56.909223  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.909232  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:56.909238  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:56.909296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:56.934089  285837 cri.go:89] found id: ""
	I1213 10:08:56.934110  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.934119  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:56.934126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:56.934183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:56.958725  285837 cri.go:89] found id: ""
	I1213 10:08:56.958745  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.958754  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:56.958764  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:56.958775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:57.027824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:57.027846  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:57.027859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:57.054139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:57.054169  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:57.085873  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:57.085903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:57.144978  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:57.145011  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.659171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.669569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:59.669639  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:59.695058  285837 cri.go:89] found id: ""
	I1213 10:08:59.695123  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.695146  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:59.695163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:59.695255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:59.720734  285837 cri.go:89] found id: ""
	I1213 10:08:59.720799  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.720822  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:59.720840  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:59.720935  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:59.744586  285837 cri.go:89] found id: ""
	I1213 10:08:59.744661  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.744684  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:59.744698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:59.744770  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:59.771374  285837 cri.go:89] found id: ""
	I1213 10:08:59.771408  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.771417  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:59.771439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:59.771541  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:59.799406  285837 cri.go:89] found id: ""
	I1213 10:08:59.799441  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.799450  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:59.799473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:59.799577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:59.828067  285837 cri.go:89] found id: ""
	I1213 10:08:59.828142  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.828165  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:59.828187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:59.828255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:59.853064  285837 cri.go:89] found id: ""
	I1213 10:08:59.853130  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.853152  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:59.853174  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:59.853238  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:59.881735  285837 cri.go:89] found id: ""
	I1213 10:08:59.881772  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.881781  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:59.881790  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:59.881820  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:59.909551  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:59.909578  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:59.965746  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:59.965781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.979378  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:59.979407  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:00.187890  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:00.187915  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:00.187930  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
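	Every describe-nodes attempt fails identically: kubectl dials localhost:8443 and gets connection refused, which matches crictl finding no kube-apiserver container at all. A quick way to confirm from inside the node that nothing is serving that port (a sketch; the presence of ss and curl in the node image is an assumption, not shown in this log):

	    # Nothing should be listening on the apiserver port named in the errors above.
	    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"
	    # The health endpoint should refuse the connection as well.
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"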
	I1213 10:09:02.742568  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:02.753251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:02.753340  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:02.786726  285837 cri.go:89] found id: ""
	I1213 10:09:02.786749  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.786758  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:02.786764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:02.786823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:02.819145  285837 cri.go:89] found id: ""
	I1213 10:09:02.819166  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.819174  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:02.819193  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:02.819251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:02.847100  285837 cri.go:89] found id: ""
	I1213 10:09:02.847124  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.847133  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:02.847139  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:02.847202  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:02.873292  285837 cri.go:89] found id: ""
	I1213 10:09:02.873316  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.873325  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:02.873332  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:02.873388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:02.897520  285837 cri.go:89] found id: ""
	I1213 10:09:02.897544  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.897553  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:02.897560  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:02.897617  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:02.922393  285837 cri.go:89] found id: ""
	I1213 10:09:02.922416  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.922425  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:02.922431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:02.922490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:02.947241  285837 cri.go:89] found id: ""
	I1213 10:09:02.947264  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.947272  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:02.947278  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:02.947335  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:02.972679  285837 cri.go:89] found id: ""
	I1213 10:09:02.972704  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.972713  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:02.972722  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:02.972733  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:03.034867  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:03.034909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:03.052540  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:03.052570  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:03.128351  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:03.128373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:03.128386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:03.154970  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:03.155008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:05.683571  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:05.693787  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:05.693854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:05.718259  285837 cri.go:89] found id: ""
	I1213 10:09:05.718282  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.718291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:05.718297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:05.718357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:05.745891  285837 cri.go:89] found id: ""
	I1213 10:09:05.745915  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.745924  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:05.745931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:05.745987  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:05.782435  285837 cri.go:89] found id: ""
	I1213 10:09:05.782460  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.782469  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:05.782475  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:05.782530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:05.814908  285837 cri.go:89] found id: ""
	I1213 10:09:05.814951  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.814962  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:05.814969  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:05.815039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:05.841933  285837 cri.go:89] found id: ""
	I1213 10:09:05.841961  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.841971  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:05.841978  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:05.842039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:05.866012  285837 cri.go:89] found id: ""
	I1213 10:09:05.866041  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.866050  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:05.866056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:05.866115  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:05.890279  285837 cri.go:89] found id: ""
	I1213 10:09:05.890307  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.890315  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:05.890322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:05.890379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:05.915405  285837 cri.go:89] found id: ""
	I1213 10:09:05.915428  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.915436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:05.915446  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:05.915457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:05.971454  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:05.971486  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:05.984906  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:05.984951  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:06.083616  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:06.083701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:06.083737  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:06.114405  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:06.114443  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:08.641977  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:08.652131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:08.652197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:08.675938  285837 cri.go:89] found id: ""
	I1213 10:09:08.675961  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.675970  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:08.675976  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:08.676038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:08.702206  285837 cri.go:89] found id: ""
	I1213 10:09:08.702281  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.702304  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:08.702321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:08.702400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:08.726527  285837 cri.go:89] found id: ""
	I1213 10:09:08.726599  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.726621  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:08.726639  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:08.726726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:08.751396  285837 cri.go:89] found id: ""
	I1213 10:09:08.751469  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.751492  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:08.751555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:08.751631  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:08.787796  285837 cri.go:89] found id: ""
	I1213 10:09:08.787828  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.787838  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:08.787844  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:08.787908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:08.819599  285837 cri.go:89] found id: ""
	I1213 10:09:08.819634  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.819643  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:08.819650  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:08.819717  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:08.846345  285837 cri.go:89] found id: ""
	I1213 10:09:08.846372  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.846381  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:08.846387  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:08.846445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:08.870594  285837 cri.go:89] found id: ""
	I1213 10:09:08.870664  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.870710  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:08.870746  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:08.870797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:08.928780  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:08.928814  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:08.944017  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:08.944043  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:09.014860  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:09.004450    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.005493    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008097    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008783    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.010658    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:09.014883  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:09.014896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:09.047081  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:09.047174  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
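Every describe-nodes attempt in this window fails identically: kubectl cannot reach https://localhost:8443 because no apiserver is listening. A hedged check, not part of this report, that separates "nothing bound to the port" from "listener up but unhealthy":

    # Is any process bound to the apiserver port inside the node?
    sudo ss -ltnp 'sport = :8443'
    # If a listener exists, probe the apiserver liveness endpoint directly.
    curl -sk https://localhost:8443/livez; echo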
	I1213 10:09:11.588198  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:11.600902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:11.600973  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:11.629261  285837 cri.go:89] found id: ""
	I1213 10:09:11.629286  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.629295  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:11.629301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:11.629362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:11.653238  285837 cri.go:89] found id: ""
	I1213 10:09:11.653260  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.653269  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:11.653275  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:11.653332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:11.681922  285837 cri.go:89] found id: ""
	I1213 10:09:11.681946  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.681956  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:11.681962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:11.682019  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:11.711733  285837 cri.go:89] found id: ""
	I1213 10:09:11.711762  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.711770  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:11.711776  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:11.711834  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:11.736582  285837 cri.go:89] found id: ""
	I1213 10:09:11.736608  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.736616  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:11.736625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:11.736681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:11.759927  285837 cri.go:89] found id: ""
	I1213 10:09:11.759951  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.759961  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:11.759967  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:11.760022  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:11.794760  285837 cri.go:89] found id: ""
	I1213 10:09:11.794787  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.794797  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:11.794803  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:11.794862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:11.822009  285837 cri.go:89] found id: ""
	I1213 10:09:11.822037  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.822047  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:11.822056  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:11.822068  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:11.889206  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:11.881444    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882052    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882987    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.883619    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.885240    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:11.889228  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:11.889241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:11.914544  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:11.914576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.944548  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:11.944576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:12.000427  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:12.000460  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
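The sudo pgrep -xnf kube-apiserver.*minikube.* line that opens each cycle is an exact-match (-x), newest-process (-n), full-command-line (-f) search, and the roughly three-second spacing of the timestamps suggests a bounded retry loop. An equivalent wait, with interval and timeout as assumptions rather than minikube's actual values:

    # Poll until an apiserver process mentioning this profile appears,
    # giving up after about five minutes.
    for _ in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done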
	I1213 10:09:14.516876  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:14.527580  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:14.527657  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:14.551881  285837 cri.go:89] found id: ""
	I1213 10:09:14.551903  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.551911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:14.551917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:14.551977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:14.576244  285837 cri.go:89] found id: ""
	I1213 10:09:14.576267  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.576275  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:14.576281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:14.576337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:14.604979  285837 cri.go:89] found id: ""
	I1213 10:09:14.605002  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.605011  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:14.605017  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:14.605084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:14.633024  285837 cri.go:89] found id: ""
	I1213 10:09:14.633050  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.633059  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:14.633065  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:14.633123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:14.661288  285837 cri.go:89] found id: ""
	I1213 10:09:14.661316  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.661324  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:14.661331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:14.661390  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:14.686665  285837 cri.go:89] found id: ""
	I1213 10:09:14.686694  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.686704  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:14.686711  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:14.686769  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:14.712111  285837 cri.go:89] found id: ""
	I1213 10:09:14.712139  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.712148  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:14.712156  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:14.712212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:14.740346  285837 cri.go:89] found id: ""
	I1213 10:09:14.740392  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.740401  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:14.740410  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:14.740423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:14.753460  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:14.753488  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:14.834789  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:14.826269    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.827206    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.828874    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.829190    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.830588    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:14.834812  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:14.834824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:14.859634  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:14.859666  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:14.890753  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:14.890826  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.450898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:17.461075  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:17.461145  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:17.486593  285837 cri.go:89] found id: ""
	I1213 10:09:17.486616  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.486625  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:17.486632  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:17.486689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:17.511138  285837 cri.go:89] found id: ""
	I1213 10:09:17.511214  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.511230  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:17.511237  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:17.511302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:17.535780  285837 cri.go:89] found id: ""
	I1213 10:09:17.535808  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.535818  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:17.535824  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:17.535879  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:17.559884  285837 cri.go:89] found id: ""
	I1213 10:09:17.559907  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.559916  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:17.559922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:17.559983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:17.588420  285837 cri.go:89] found id: ""
	I1213 10:09:17.588446  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.588456  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:17.588462  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:17.588520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:17.616357  285837 cri.go:89] found id: ""
	I1213 10:09:17.616427  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.616450  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:17.616470  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:17.616553  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:17.640411  285837 cri.go:89] found id: ""
	I1213 10:09:17.640481  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.640506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:17.640525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:17.640606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:17.670821  285837 cri.go:89] found id: ""
	I1213 10:09:17.670887  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.670910  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:17.670931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:17.670976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.730483  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:17.730517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:17.743937  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:17.743965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:17.835718  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:17.824527    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.825248    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.826849    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.827326    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.828934    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:17.835789  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:17.835817  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:17.865207  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:17.865241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
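Each cycle also captures the same three host-side log sources. The commands below are lifted verbatim from the Run: lines and can be replayed by hand inside the node (for example via minikube ssh; reaching the node that way is an assumption, not something this test did):

    sudo journalctl -u kubelet -n 400      # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400   # last 400 containerd journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400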
	I1213 10:09:20.392780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:20.403097  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:20.403162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:20.428028  285837 cri.go:89] found id: ""
	I1213 10:09:20.428060  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.428069  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:20.428076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:20.428141  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:20.452273  285837 cri.go:89] found id: ""
	I1213 10:09:20.452297  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.452305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:20.452312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:20.452375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:20.476828  285837 cri.go:89] found id: ""
	I1213 10:09:20.476852  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.476860  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:20.476866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:20.476922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:20.500929  285837 cri.go:89] found id: ""
	I1213 10:09:20.500952  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.500968  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:20.500975  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:20.501033  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:20.528180  285837 cri.go:89] found id: ""
	I1213 10:09:20.528207  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.528217  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:20.528223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:20.528284  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:20.553290  285837 cri.go:89] found id: ""
	I1213 10:09:20.553314  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.553323  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:20.553330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:20.553386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:20.577422  285837 cri.go:89] found id: ""
	I1213 10:09:20.577446  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.577455  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:20.577464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:20.577518  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:20.601597  285837 cri.go:89] found id: ""
	I1213 10:09:20.601623  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.601632  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:20.601643  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:20.601654  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:20.656521  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:20.656556  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:20.669890  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:20.669920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:20.737784  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:20.729553    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.730242    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.731915    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.732434    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.734060    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:20.737806  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:20.737818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:20.762811  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:20.762845  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
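The container status step is a fallback chain: resolve crictl's full path when which can find it, otherwise invoke it by bare name, and only if that whole invocation fails fall back to docker. The same command, with $(...) in place of the log's backticks:

    # Prefer crictl; fall back to docker ps only when crictl itself fails.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a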
	I1213 10:09:23.299625  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:23.311059  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:23.311129  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:23.338174  285837 cri.go:89] found id: ""
	I1213 10:09:23.338197  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.338205  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:23.338211  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:23.338269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:23.363653  285837 cri.go:89] found id: ""
	I1213 10:09:23.363674  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.363683  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:23.363688  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:23.363750  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:23.387166  285837 cri.go:89] found id: ""
	I1213 10:09:23.387187  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.387195  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:23.387201  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:23.387257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:23.411627  285837 cri.go:89] found id: ""
	I1213 10:09:23.411650  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.411659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:23.411665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:23.411731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:23.440839  285837 cri.go:89] found id: ""
	I1213 10:09:23.440866  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.440885  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:23.440892  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:23.440950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:23.464835  285837 cri.go:89] found id: ""
	I1213 10:09:23.464857  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.464866  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:23.464872  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:23.464927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:23.489635  285837 cri.go:89] found id: ""
	I1213 10:09:23.489659  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.489668  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:23.489675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:23.489762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:23.513816  285837 cri.go:89] found id: ""
	I1213 10:09:23.513847  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.513855  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:23.513865  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:23.513875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:23.539139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:23.539173  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.565435  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:23.565463  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:23.622023  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:23.622058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:23.635231  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:23.635263  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:23.699057  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:23.690976    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.691550    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693329    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693735    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.695223    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.200117  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:26.210617  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:26.210696  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:26.235048  285837 cri.go:89] found id: ""
	I1213 10:09:26.235076  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.235085  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:26.235092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:26.235148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:26.259259  285837 cri.go:89] found id: ""
	I1213 10:09:26.259285  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.259294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:26.259300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:26.259355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:26.291742  285837 cri.go:89] found id: ""
	I1213 10:09:26.291767  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.291776  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:26.291782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:26.291864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:26.320200  285837 cri.go:89] found id: ""
	I1213 10:09:26.320225  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.320234  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:26.320240  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:26.320296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:26.347996  285837 cri.go:89] found id: ""
	I1213 10:09:26.348023  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.348033  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:26.348039  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:26.348097  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:26.376752  285837 cri.go:89] found id: ""
	I1213 10:09:26.376816  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.376830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:26.376837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:26.376893  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:26.404777  285837 cri.go:89] found id: ""
	I1213 10:09:26.404802  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.404811  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:26.404817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:26.404876  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:26.428882  285837 cri.go:89] found id: ""
	I1213 10:09:26.428904  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.428913  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:26.428922  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:26.428933  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:26.489455  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:26.489494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:26.504291  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:26.504320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:26.573661  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:26.564906    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.565725    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567441    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567990    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.569686    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.573684  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:26.573698  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:26.599463  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:26.599496  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
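The repeated memcache.go:265 errors are kubectl's API discovery retrying GET /api against the server named in /var/lib/minikube/kubeconfig. A hedged way to confirm which endpoint that kubeconfig targets (an added suggestion, not part of the test run):

    # Print the server URL the in-node kubeconfig points kubectl at.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'; echo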
	I1213 10:09:29.127681  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:29.138010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:29.138081  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:29.161918  285837 cri.go:89] found id: ""
	I1213 10:09:29.161989  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.162013  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:29.162031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:29.162114  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:29.186603  285837 cri.go:89] found id: ""
	I1213 10:09:29.186678  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.186700  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:29.186717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:29.186798  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:29.210425  285837 cri.go:89] found id: ""
	I1213 10:09:29.210489  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.210512  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:29.210529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:29.210614  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:29.237345  285837 cri.go:89] found id: ""
	I1213 10:09:29.237369  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.237377  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:29.237384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:29.237440  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:29.260918  285837 cri.go:89] found id: ""
	I1213 10:09:29.260997  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.261013  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:29.261020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:29.261075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:29.289712  285837 cri.go:89] found id: ""
	I1213 10:09:29.289738  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.289747  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:29.289753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:29.289808  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:29.321797  285837 cri.go:89] found id: ""
	I1213 10:09:29.321821  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.321831  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:29.321839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:29.321895  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:29.353498  285837 cri.go:89] found id: ""
	I1213 10:09:29.353523  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.353532  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:29.353542  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:29.353582  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:29.415160  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:29.407188    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.407994    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409598    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409900    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.411394    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:29.407188    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.407994    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409598    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409900    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.411394    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:29.415183  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:29.415198  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:29.440924  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:29.440961  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.468916  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:29.468944  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:29.528468  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:29.528501  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
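The cycle above repeats for the rest of this start attempt: minikube looks for a kube-apiserver process with pgrep, asks crictl for each control-plane container by name, finds none, and falls back to gathering diagnostics. Below is a minimal standalone Go sketch of that probe pattern, not minikube's actual code; it assumes pgrep and crictl are on PATH and runnable via sudo, and the roughly 3-second retry cadence is read off the log timestamps.

// probe.go - an illustrative sketch of the probe loop seen in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line; empty output means no containers,
// which is what every "found id: \"\"" line above records.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for {
		// pgrep exits non-zero when nothing matches, mirroring the
		// `sudo pgrep -xnf kube-apiserver.*minikube.*` call in the log.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			fmt.Printf("%s: %d containers (err=%v)\n", c, len(ids), err)
		}
		time.Sleep(3 * time.Second) // cadence assumed from the log timestamps
	}
}

Run on the node itself (for example inside `minikube ssh`); empty crictl output for every component on every pass, as in this log, means the control plane never came up.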
	I1213 10:09:32.042457  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:32.054480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:32.054563  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:32.088256  285837 cri.go:89] found id: ""
	I1213 10:09:32.088282  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.088290  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:32.088296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:32.088382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:32.114080  285837 cri.go:89] found id: ""
	I1213 10:09:32.114102  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.114110  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:32.114116  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:32.114195  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:32.138708  285837 cri.go:89] found id: ""
	I1213 10:09:32.138732  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.138740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:32.138746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:32.138851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:32.163676  285837 cri.go:89] found id: ""
	I1213 10:09:32.163706  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.163715  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:32.163721  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:32.163780  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:32.188486  285837 cri.go:89] found id: ""
	I1213 10:09:32.188565  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.188582  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:32.188589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:32.188652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:32.212912  285837 cri.go:89] found id: ""
	I1213 10:09:32.212936  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.212945  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:32.212951  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:32.213034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:32.242067  285837 cri.go:89] found id: ""
	I1213 10:09:32.242090  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.242099  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:32.242106  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:32.242163  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:32.280832  285837 cri.go:89] found id: ""
	I1213 10:09:32.280855  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.280864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:32.280874  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:32.280885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:32.344925  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:32.344963  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:32.359370  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:32.359400  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:32.425438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:32.425459  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:32.425472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:32.449956  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:32.449990  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:34.978245  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:34.989159  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:34.989236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:35.017235  285837 cri.go:89] found id: ""
	I1213 10:09:35.017258  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.017267  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:35.017273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:35.017341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:35.050437  285837 cri.go:89] found id: ""
	I1213 10:09:35.050458  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.050467  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:35.050473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:35.050529  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:35.085905  285837 cri.go:89] found id: ""
	I1213 10:09:35.085926  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.085935  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:35.085941  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:35.085994  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:35.118261  285837 cri.go:89] found id: ""
	I1213 10:09:35.118283  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.118292  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:35.118299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:35.118360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:35.144531  285837 cri.go:89] found id: ""
	I1213 10:09:35.144555  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.144563  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:35.144569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:35.144627  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:35.170241  285837 cri.go:89] found id: ""
	I1213 10:09:35.170317  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.170340  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:35.170359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:35.170433  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:35.195958  285837 cri.go:89] found id: ""
	I1213 10:09:35.195986  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.195995  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:35.196001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:35.196066  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:35.220509  285837 cri.go:89] found id: ""
	I1213 10:09:35.220535  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.220544  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:35.220553  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:35.220563  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:35.276863  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:35.277042  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:35.294239  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:35.294265  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:35.367085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:35.367108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:35.367121  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:35.392804  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:35.392842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:37.919692  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:37.929805  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:37.929875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:37.954708  285837 cri.go:89] found id: ""
	I1213 10:09:37.954782  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.954806  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:37.954825  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:37.954914  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:37.979259  285837 cri.go:89] found id: ""
	I1213 10:09:37.979332  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.979357  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:37.979375  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:37.979459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:38.008473  285837 cri.go:89] found id: ""
	I1213 10:09:38.008554  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.008579  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:38.008597  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:38.008695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:38.051746  285837 cri.go:89] found id: ""
	I1213 10:09:38.051820  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.051843  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:38.051863  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:38.051957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:38.082373  285837 cri.go:89] found id: ""
	I1213 10:09:38.082405  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.082413  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:38.082419  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:38.082477  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:38.109623  285837 cri.go:89] found id: ""
	I1213 10:09:38.109646  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.109655  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:38.109661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:38.109718  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:38.133779  285837 cri.go:89] found id: ""
	I1213 10:09:38.133807  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.133815  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:38.133822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:38.133892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:38.158199  285837 cri.go:89] found id: ""
	I1213 10:09:38.158263  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.158286  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:38.158338  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:38.158371  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:38.171856  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:38.171885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:38.237998  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:38.238021  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:38.238033  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:38.263694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:38.263729  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:38.301569  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:38.301594  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
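Every `kubectl describe nodes` attempt in these cycles fails identically: the client cannot reach the apiserver, and the dial error on [::1]:8443 ("connection refused") is consistent with the empty crictl listings, since no kube-apiserver container exists and nothing is listening on the port. A small hedged Go sketch that reproduces just that TCP-level check (host and port copied from the log; everything else is illustrative):

// dialcheck.go - reproduces only the TCP-level symptom in the kubectl errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl resolved "localhost" to [::1] in the log, so try both families.
	for _, addr := range []string{"[::1]:8443", "127.0.0.1:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: listening\n", addr)
	}
}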
	I1213 10:09:40.863927  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:40.874647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:40.874715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:40.902898  285837 cri.go:89] found id: ""
	I1213 10:09:40.902922  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.902931  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:40.902939  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:40.903000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:40.928251  285837 cri.go:89] found id: ""
	I1213 10:09:40.928277  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.928287  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:40.928294  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:40.928350  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:40.952178  285837 cri.go:89] found id: ""
	I1213 10:09:40.952201  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.952210  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:40.952216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:40.952271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:40.980522  285837 cri.go:89] found id: ""
	I1213 10:09:40.980548  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.980557  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:40.980564  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:40.980620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:41.007391  285837 cri.go:89] found id: ""
	I1213 10:09:41.007417  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.007427  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:41.007433  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:41.007498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:41.056690  285837 cri.go:89] found id: ""
	I1213 10:09:41.056762  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.056786  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:41.056806  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:41.056892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:41.082372  285837 cri.go:89] found id: ""
	I1213 10:09:41.082443  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.082481  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:41.082505  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:41.082592  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:41.106556  285837 cri.go:89] found id: ""
	I1213 10:09:41.106626  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.106648  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:41.106680  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:41.106722  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:41.162248  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:41.162281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:41.175724  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:41.175753  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:41.243327  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:41.243393  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:41.243420  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:41.269060  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:41.269142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:43.812670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:43.823281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:43.823360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:43.846549  285837 cri.go:89] found id: ""
	I1213 10:09:43.846571  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.846579  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:43.846585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:43.846640  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:43.879456  285837 cri.go:89] found id: ""
	I1213 10:09:43.879541  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.879557  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:43.879563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:43.879632  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:43.904717  285837 cri.go:89] found id: ""
	I1213 10:09:43.904745  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.904755  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:43.904761  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:43.904818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:43.929847  285837 cri.go:89] found id: ""
	I1213 10:09:43.929873  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.929883  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:43.929890  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:43.929950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:43.954073  285837 cri.go:89] found id: ""
	I1213 10:09:43.954146  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.954168  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:43.954187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:43.954278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:43.979175  285837 cri.go:89] found id: ""
	I1213 10:09:43.979257  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.979280  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:43.979299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:43.979406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:44.013549  285837 cri.go:89] found id: ""
	I1213 10:09:44.013574  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.013584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:44.013590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:44.013653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:44.043145  285837 cri.go:89] found id: ""
	I1213 10:09:44.043222  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.043244  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:44.043267  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:44.043306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:44.058657  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:44.058685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:44.137763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:44.137786  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:44.137799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:44.163596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:44.163630  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:44.193981  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:44.194008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
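Each failed probe ends with the same diagnostic fan-out, visible in the "Gathering logs for ..." lines: the kubelet and containerd journals, dmesg, container status, and the failing describe-nodes call. An illustrative Go sketch of that fan-out, with the shell commands copied verbatim from the log and error handling reduced to printing; this is not minikube's actual logs.go:

// gather.go - an illustrative sketch of the diagnostic fan-out above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands are copied verbatim from the "Run:" lines in this log.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
	}
}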
	I1213 10:09:46.751860  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:46.762578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:46.762653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:46.787138  285837 cri.go:89] found id: ""
	I1213 10:09:46.787161  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.787170  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:46.787176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:46.787234  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:46.812348  285837 cri.go:89] found id: ""
	I1213 10:09:46.812371  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.812379  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:46.812386  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:46.812445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:46.840689  285837 cri.go:89] found id: ""
	I1213 10:09:46.840712  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.840721  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:46.840727  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:46.840784  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:46.870288  285837 cri.go:89] found id: ""
	I1213 10:09:46.870313  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.870322  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:46.870328  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:46.870450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:46.896231  285837 cri.go:89] found id: ""
	I1213 10:09:46.896255  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.896269  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:46.896276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:46.896334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:46.921572  285837 cri.go:89] found id: ""
	I1213 10:09:46.921604  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.921613  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:46.921636  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:46.921721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:46.948191  285837 cri.go:89] found id: ""
	I1213 10:09:46.948220  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.948229  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:46.948236  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:46.948365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:46.977518  285837 cri.go:89] found id: ""
	I1213 10:09:46.977585  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.977602  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:46.977612  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:46.977624  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:47.034861  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:47.034901  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:47.049608  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:47.049638  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:47.120624  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:47.120648  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:47.120662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:47.146083  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:47.146118  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:49.676188  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:49.688330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:49.688400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:49.714933  285837 cri.go:89] found id: ""
	I1213 10:09:49.714958  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.714967  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:49.714973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:49.715035  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:49.739883  285837 cri.go:89] found id: ""
	I1213 10:09:49.739912  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.739923  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:49.739931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:49.739990  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:49.768673  285837 cri.go:89] found id: ""
	I1213 10:09:49.768699  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.768718  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:49.768726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:49.768788  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:49.794628  285837 cri.go:89] found id: ""
	I1213 10:09:49.794694  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.794717  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:49.794735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:49.794822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:49.819205  285837 cri.go:89] found id: ""
	I1213 10:09:49.819237  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.819247  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:49.819253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:49.819318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:49.843189  285837 cri.go:89] found id: ""
	I1213 10:09:49.843212  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.843228  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:49.843235  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:49.843303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:49.867965  285837 cri.go:89] found id: ""
	I1213 10:09:49.867998  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.868008  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:49.868016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:49.868089  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:49.891561  285837 cri.go:89] found id: ""
	I1213 10:09:49.891586  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.891595  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:49.891605  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:49.891629  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:49.953785  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:49.953824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:49.967425  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:49.967453  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:50.041318  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:50.041391  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:50.041419  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:50.070955  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:50.071029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.603479  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:52.615038  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:52.615113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:52.643538  285837 cri.go:89] found id: ""
	I1213 10:09:52.643561  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.643570  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:52.643577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:52.643636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:52.668477  285837 cri.go:89] found id: ""
	I1213 10:09:52.668514  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.668523  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:52.668530  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:52.668586  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:52.695551  285837 cri.go:89] found id: ""
	I1213 10:09:52.695574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.695582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:52.695589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:52.695647  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:52.723965  285837 cri.go:89] found id: ""
	I1213 10:09:52.723991  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.724000  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:52.724007  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:52.724061  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:52.748159  285837 cri.go:89] found id: ""
	I1213 10:09:52.748186  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.748195  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:52.748202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:52.748257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:52.771805  285837 cri.go:89] found id: ""
	I1213 10:09:52.771836  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.771846  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:52.771853  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:52.771910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:52.795549  285837 cri.go:89] found id: ""
	I1213 10:09:52.795574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.795584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:52.795590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:52.795650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:52.819748  285837 cri.go:89] found id: ""
	I1213 10:09:52.819775  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.819785  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:52.819794  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:52.819805  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:52.882031  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:52.882051  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:52.882062  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:52.907759  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:52.907795  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.934360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:52.934390  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:52.989946  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:52.989982  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
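Each retry cycle here walks the same fixed list of control-plane components and asks crictl for matching container IDs; an empty result for every name is what produces the repeated "0 containers" / "No container was found" warnings. A rough standalone sketch of that probe loop, meant to be run on the node itself (the component list and crictl flags come from the log; the Go wrapper around them is illustrative):

// cri_probe.go: list container IDs per control-plane component, the way
// the log's `crictl ps -a --quiet --name=...` probes do. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes
		// exited containers, matching {State:all ...} in the log.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}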
	I1213 10:09:55.503671  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:55.514125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:55.514196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:55.540594  285837 cri.go:89] found id: ""
	I1213 10:09:55.540621  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.540631  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:55.540637  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:55.540694  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:55.570352  285837 cri.go:89] found id: ""
	I1213 10:09:55.570378  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.570387  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:55.570395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:55.570450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:55.596509  285837 cri.go:89] found id: ""
	I1213 10:09:55.596533  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.596541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:55.596547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:55.596604  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:55.622553  285837 cri.go:89] found id: ""
	I1213 10:09:55.622579  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.622587  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:55.622593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:55.622650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:55.647770  285837 cri.go:89] found id: ""
	I1213 10:09:55.647794  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.647803  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:55.647809  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:55.647874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:55.672615  285837 cri.go:89] found id: ""
	I1213 10:09:55.672679  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.672693  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:55.672701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:55.672756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:55.697017  285837 cri.go:89] found id: ""
	I1213 10:09:55.697041  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.697050  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:55.697063  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:55.697123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:55.720795  285837 cri.go:89] found id: ""
	I1213 10:09:55.720866  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.720891  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:55.720914  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:55.720950  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:55.745823  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:55.745857  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:55.774634  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:55.774663  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:55.830064  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:55.830098  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:55.843868  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:55.843896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:55.905758  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:58.406072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:58.418120  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:58.418199  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:58.443021  285837 cri.go:89] found id: ""
	I1213 10:09:58.443050  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.443059  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:58.443066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:58.443126  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:58.468115  285837 cri.go:89] found id: ""
	I1213 10:09:58.468139  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.468147  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:58.468154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:58.468214  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:58.496991  285837 cri.go:89] found id: ""
	I1213 10:09:58.497015  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.497025  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:58.497032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:58.497098  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:58.530053  285837 cri.go:89] found id: ""
	I1213 10:09:58.530076  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.530085  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:58.530091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:58.530149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:58.561990  285837 cri.go:89] found id: ""
	I1213 10:09:58.562013  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.562022  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:58.562028  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:58.562091  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:58.595912  285837 cri.go:89] found id: ""
	I1213 10:09:58.595984  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.596007  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:58.596026  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:58.596113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:58.626521  285837 cri.go:89] found id: ""
	I1213 10:09:58.626593  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.626616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:58.626635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:58.626720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:58.655898  285837 cri.go:89] found id: ""
	I1213 10:09:58.655963  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.655987  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:58.656008  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:58.656032  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:58.711709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:58.711741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:58.726942  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:58.726969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:58.798293  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:58.798314  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:58.798327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:58.822936  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:58.822973  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
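The timestamps trace a fixed cadence: the whole probe bundle repeats roughly every three seconds (10:09:52, :55, :58, 10:10:01, ...), i.e. a poll-until-healthy loop that keeps going until the start timeout expires. A hedged sketch of that shape, with the interval and deadline as stand-in values rather than minikube's configured ones:

// poll_apiserver.go: poll a health check on a fixed interval until a
// deadline, the pattern the timestamps above trace out. Interval and
// deadline are illustrative, not minikube's actual settings.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing in the log
	}
	fmt.Println("timed out waiting for apiserver")
}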
	I1213 10:10:01.351670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:01.362442  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:01.362517  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:01.388700  285837 cri.go:89] found id: ""
	I1213 10:10:01.388734  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.388744  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:01.388751  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:01.388824  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:01.418393  285837 cri.go:89] found id: ""
	I1213 10:10:01.418471  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.418496  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:01.418515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:01.418602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:01.449860  285837 cri.go:89] found id: ""
	I1213 10:10:01.449937  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.449962  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:01.449980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:01.450064  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:01.475973  285837 cri.go:89] found id: ""
	I1213 10:10:01.476035  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.476049  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:01.476056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:01.476118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:01.501452  285837 cri.go:89] found id: ""
	I1213 10:10:01.501474  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.501499  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:01.501506  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:01.501576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:01.527738  285837 cri.go:89] found id: ""
	I1213 10:10:01.527808  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.527832  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:01.527852  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:01.527946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:01.553256  285837 cri.go:89] found id: ""
	I1213 10:10:01.553280  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.553289  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:01.553296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:01.553354  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:01.578833  285837 cri.go:89] found id: ""
	I1213 10:10:01.578855  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.578864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:01.578875  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:01.578892  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:01.634755  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:01.634790  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:01.649799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:01.649832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:01.721470  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:01.721491  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:01.721504  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:01.747322  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:01.747357  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.288307  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:04.300683  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:04.300805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:04.332215  285837 cri.go:89] found id: ""
	I1213 10:10:04.332242  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.332252  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:04.332259  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:04.332318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:04.358136  285837 cri.go:89] found id: ""
	I1213 10:10:04.358164  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.358173  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:04.358180  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:04.358248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:04.383446  285837 cri.go:89] found id: ""
	I1213 10:10:04.383479  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.383488  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:04.383493  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:04.383578  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:04.408888  285837 cri.go:89] found id: ""
	I1213 10:10:04.408914  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.408923  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:04.408930  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:04.409009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:04.438109  285837 cri.go:89] found id: ""
	I1213 10:10:04.438145  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.438155  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:04.438163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:04.438233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:04.462623  285837 cri.go:89] found id: ""
	I1213 10:10:04.462692  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.462725  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:04.462745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:04.462826  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:04.488102  285837 cri.go:89] found id: ""
	I1213 10:10:04.488127  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.488137  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:04.488143  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:04.488230  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:04.515038  285837 cri.go:89] found id: ""
	I1213 10:10:04.515078  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.515087  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:04.515096  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:04.515134  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:04.540448  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:04.540483  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.570913  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:04.570942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:04.626396  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:04.626430  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:04.639908  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:04.639938  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:04.704410  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
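The memcache.go:265 lines are kubectl's discovery client failing to fetch the server's API group list from /api. The same check can be reproduced programmatically; below is a client-go sketch pointed at the node's kubeconfig path from the log (the code itself is illustrative, not what the test harness runs):

// discovery_check.go: fetch the API group list via client-go discovery,
// which fails with the same "connection refused" as the memcache.go
// errors above when no apiserver is listening. Illustrative sketch.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build client:", err)
		return
	}
	// ServerGroups hits GET /api and /apis, the exact requests the
	// "couldn't get current server API group list" errors come from.
	groups, err := clientset.Discovery().ServerGroups()
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Printf("server exposes %d API groups\n", len(groups.Groups))
}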
	I1213 10:10:07.204629  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:07.215001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:07.215080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:07.239145  285837 cri.go:89] found id: ""
	I1213 10:10:07.239170  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.239180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:07.239186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:07.239243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:07.263051  285837 cri.go:89] found id: ""
	I1213 10:10:07.263077  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.263086  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:07.263092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:07.263149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:07.293024  285837 cri.go:89] found id: ""
	I1213 10:10:07.293051  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.293060  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:07.293066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:07.293142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:07.320096  285837 cri.go:89] found id: ""
	I1213 10:10:07.320119  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.320128  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:07.320133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:07.320189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:07.349635  285837 cri.go:89] found id: ""
	I1213 10:10:07.349661  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.349670  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:07.349676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:07.349733  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:07.374644  285837 cri.go:89] found id: ""
	I1213 10:10:07.374720  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.374744  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:07.374767  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:07.374875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:07.399088  285837 cri.go:89] found id: ""
	I1213 10:10:07.399108  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.399117  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:07.399123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:07.399179  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:07.423187  285837 cri.go:89] found id: ""
	I1213 10:10:07.423210  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.423219  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:07.423229  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:07.423244  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:07.478648  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:07.478682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:07.492218  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:07.492247  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:07.558077  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:07.558147  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:07.558168  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:07.583061  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:07.583093  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.116593  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:10.127456  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:10.127551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:10.157660  285837 cri.go:89] found id: ""
	I1213 10:10:10.157684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.157693  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:10.157699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:10.157758  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:10.183132  285837 cri.go:89] found id: ""
	I1213 10:10:10.183166  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.183175  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:10.183181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:10.183248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:10.209615  285837 cri.go:89] found id: ""
	I1213 10:10:10.209681  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.209704  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:10.209723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:10.209817  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:10.234760  285837 cri.go:89] found id: ""
	I1213 10:10:10.234789  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.234798  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:10.234804  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:10.234877  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:10.261577  285837 cri.go:89] found id: ""
	I1213 10:10:10.261608  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.261618  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:10.261624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:10.261682  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:10.289616  285837 cri.go:89] found id: ""
	I1213 10:10:10.289655  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.289664  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:10.289670  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:10.289742  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:10.316640  285837 cri.go:89] found id: ""
	I1213 10:10:10.316684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.316693  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:10.316699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:10.316768  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:10.346038  285837 cri.go:89] found id: ""
	I1213 10:10:10.346065  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.346074  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:10.346084  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:10.346095  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.377589  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:10.377669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:10.435680  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:10.435714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:10.449198  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:10.449226  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:10.521596  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:10.521619  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:10.521632  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
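The "container status" gather deliberately uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so it still produces output on nodes where crictl is missing or broken. A sketch that approximates the same preference order in Go (binary names are the ones from the log; the LookPath check and error handling are assumptions, not the harness's exact semantics):

// container_status.go: prefer crictl, fall back to docker, roughly
// mirroring the shell one-liner in the log. Assumes passwordless sudo.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tool := "crictl"
	if _, err := exec.LookPath("crictl"); err != nil {
		tool = "docker" // crictl absent: take the `|| sudo docker ps -a` branch
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("%s ps -a failed: %v\n", tool, err)
	}
	fmt.Print(string(out))
}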
	I1213 10:10:13.047644  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:13.059744  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:13.059820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:13.087860  285837 cri.go:89] found id: ""
	I1213 10:10:13.087901  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.087911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:13.087918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:13.087983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:13.112735  285837 cri.go:89] found id: ""
	I1213 10:10:13.112802  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.112844  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:13.112876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:13.112953  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:13.141197  285837 cri.go:89] found id: ""
	I1213 10:10:13.141223  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.141244  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:13.141255  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:13.141315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:13.165043  285837 cri.go:89] found id: ""
	I1213 10:10:13.165119  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.165143  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:13.165155  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:13.165240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:13.189664  285837 cri.go:89] found id: ""
	I1213 10:10:13.189746  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.189769  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:13.189782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:13.189854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:13.213620  285837 cri.go:89] found id: ""
	I1213 10:10:13.213686  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.213709  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:13.213723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:13.213799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:13.241644  285837 cri.go:89] found id: ""
	I1213 10:10:13.241667  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.241676  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:13.241728  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:13.241812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:13.265927  285837 cri.go:89] found id: ""
	I1213 10:10:13.265997  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.266030  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:13.266053  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:13.266079  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.293162  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:13.293239  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:13.326250  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:13.326334  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:13.386676  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:13.386710  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:13.400810  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:13.400838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:13.469704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
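
Every "describe nodes" attempt in this run fails identically: kubectl on the node cannot reach an apiserver at localhost:8443 because none is running. The failing invocation can be reproduced directly; a small Go wrapper around the exact command string captured above (a sketch, not minikube's own code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact command the log shows exiting with status 1; the
	// kubeconfig pins the server to https://localhost:8443.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With nothing listening on 8443 this reprints the same
		// "connection refused" stderr captured in the report.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
```
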
	I1213 10:10:15.969962  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:15.980347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:15.980492  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:16.010088  285837 cri.go:89] found id: ""
	I1213 10:10:16.010118  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.010127  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:16.010133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:16.010196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:16.049187  285837 cri.go:89] found id: ""
	I1213 10:10:16.049209  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.049217  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:16.049223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:16.049291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:16.077965  285837 cri.go:89] found id: ""
	I1213 10:10:16.077987  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.077996  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:16.078002  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:16.078058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:16.108378  285837 cri.go:89] found id: ""
	I1213 10:10:16.108451  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.108474  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:16.108492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:16.108577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:16.134213  285837 cri.go:89] found id: ""
	I1213 10:10:16.134235  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.134244  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:16.134250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:16.134310  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:16.160222  285837 cri.go:89] found id: ""
	I1213 10:10:16.160255  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.160266  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:16.160273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:16.160343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:16.188619  285837 cri.go:89] found id: ""
	I1213 10:10:16.188646  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.188655  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:16.188662  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:16.188725  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:16.213285  285837 cri.go:89] found id: ""
	I1213 10:10:16.213358  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.213375  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:16.213387  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:16.213398  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:16.241893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:16.241922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:16.298312  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:16.298349  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:16.312327  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:16.312403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:16.384024  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:16.384050  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:16.384064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:18.909524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:18.920391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:18.920459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:18.945319  285837 cri.go:89] found id: ""
	I1213 10:10:18.945358  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.945367  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:18.945374  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:18.945431  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:18.968360  285837 cri.go:89] found id: ""
	I1213 10:10:18.968381  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.968390  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:18.968420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:18.968476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:18.992303  285837 cri.go:89] found id: ""
	I1213 10:10:18.992324  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.992333  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:18.992339  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:18.992393  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:19.017601  285837 cri.go:89] found id: ""
	I1213 10:10:19.017677  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.017700  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:19.017718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:19.017814  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:19.057563  285837 cri.go:89] found id: ""
	I1213 10:10:19.057636  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.057672  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:19.057695  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:19.057783  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:19.089906  285837 cri.go:89] found id: ""
	I1213 10:10:19.089929  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.089938  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:19.089944  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:19.090014  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:19.115237  285837 cri.go:89] found id: ""
	I1213 10:10:19.115258  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.115266  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:19.115272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:19.115351  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:19.140000  285837 cri.go:89] found id: ""
	I1213 10:10:19.140067  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.140090  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:19.140112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:19.140150  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:19.201866  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:19.201888  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:19.201900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:19.227103  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:19.227135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:19.253635  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:19.253664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:19.317211  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:19.317245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
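
Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` to check whether an apiserver process exists at all before falling back to the per-component crictl probes. A standalone sketch of that process check (same flags, assuming procps pgrep is available):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the whole
	// line to match the pattern, -n keeps only the newest match.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits 1 when nothing matches, which Output surfaces
		// as an *exec.ExitError.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}
```
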
	I1213 10:10:21.835317  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:21.848786  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:21.848905  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:21.873912  285837 cri.go:89] found id: ""
	I1213 10:10:21.873938  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.873947  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:21.873966  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:21.874030  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:21.898927  285837 cri.go:89] found id: ""
	I1213 10:10:21.898948  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.898957  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:21.898963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:21.899017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:21.928040  285837 cri.go:89] found id: ""
	I1213 10:10:21.928067  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.928076  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:21.928083  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:21.928139  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:21.952762  285837 cri.go:89] found id: ""
	I1213 10:10:21.952784  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.952793  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:21.952800  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:21.952862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:21.977394  285837 cri.go:89] found id: ""
	I1213 10:10:21.977421  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.977430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:21.977437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:21.977502  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:22.001693  285837 cri.go:89] found id: ""
	I1213 10:10:22.001729  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.001739  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:22.001746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:22.001813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:22.044074  285837 cri.go:89] found id: ""
	I1213 10:10:22.044111  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.044120  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:22.044126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:22.044203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:22.083324  285837 cri.go:89] found id: ""
	I1213 10:10:22.083361  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.083370  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:22.083380  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:22.083392  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:22.152550  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:22.152574  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:22.152590  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:22.177867  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:22.177900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:22.205266  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:22.205296  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:22.260906  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:22.260942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
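
The "Gathering logs" steps shell out with fixed flags: the last 400 journal lines per unit, and for dmesg only warn-and-above (`-P` disables the pager, `-H` gives human-readable timestamps, `-L=never` turns color off). Run directly, the same collection looks like this (a sketch assuming a systemd journal and util-linux dmesg):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors the collection commands above; each runs through
// bash so the pipe in the dmesg command works as written.
func gather() map[string]string {
	cmds := map[string]string{
		"containerd": "sudo journalctl -u containerd -n 400",
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	logs := make(map[string]string)
	for name, cmd := range cmds {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			logs[name] = fmt.Sprintf("collection failed: %v", err)
			continue
		}
		logs[name] = string(b)
	}
	return logs
}

func main() {
	for name, text := range gather() {
		fmt.Printf("=== %s: %d bytes collected ===\n", name, len(text))
	}
}
```
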
	I1213 10:10:24.776001  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:24.787300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:24.787370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:24.817822  285837 cri.go:89] found id: ""
	I1213 10:10:24.817967  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.817991  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:24.818032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:24.818131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:24.843042  285837 cri.go:89] found id: ""
	I1213 10:10:24.843079  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.843088  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:24.843094  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:24.843160  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:24.866977  285837 cri.go:89] found id: ""
	I1213 10:10:24.867012  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.867022  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:24.867029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:24.867100  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:24.892141  285837 cri.go:89] found id: ""
	I1213 10:10:24.892167  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.892177  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:24.892183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:24.892258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:24.922137  285837 cri.go:89] found id: ""
	I1213 10:10:24.922207  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.922230  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:24.922248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:24.922343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:24.954689  285837 cri.go:89] found id: ""
	I1213 10:10:24.954720  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.954729  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:24.954736  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:24.954802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:24.979305  285837 cri.go:89] found id: ""
	I1213 10:10:24.979379  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.979400  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:24.979420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:24.979545  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:25.012112  285837 cri.go:89] found id: ""
	I1213 10:10:25.012139  285837 logs.go:282] 0 containers: []
	W1213 10:10:25.012149  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:25.012163  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:25.012177  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:25.083061  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:25.083100  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:25.100686  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:25.100713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:25.172319  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:25.172341  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:25.172354  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:25.198195  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:25.198230  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
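
The "container status" collector is runtime-agnostic: the one-liner "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" tries crictl first and only falls back to docker when that fails. The same precedence written out in Go (a sketch of the fallback, not minikube's actual helper):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker,
// mirroring the precedence the bash one-liner encodes.
func containerStatus() (string, error) {
	for _, args := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a container listing")
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}
```
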
	I1213 10:10:27.728458  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:27.739147  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:27.739212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:27.768935  285837 cri.go:89] found id: ""
	I1213 10:10:27.768964  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.768973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:27.768980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:27.769069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:27.793269  285837 cri.go:89] found id: ""
	I1213 10:10:27.793294  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.793303  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:27.793309  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:27.793381  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:27.819458  285837 cri.go:89] found id: ""
	I1213 10:10:27.819481  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.819490  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:27.819496  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:27.819585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:27.844796  285837 cri.go:89] found id: ""
	I1213 10:10:27.844819  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.844828  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:27.844834  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:27.844892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:27.873605  285837 cri.go:89] found id: ""
	I1213 10:10:27.873629  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.873638  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:27.873644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:27.873726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:27.897452  285837 cri.go:89] found id: ""
	I1213 10:10:27.897476  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.897485  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:27.897491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:27.897548  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:27.923761  285837 cri.go:89] found id: ""
	I1213 10:10:27.923786  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.923796  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:27.923802  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:27.923880  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:27.952811  285837 cri.go:89] found id: ""
	I1213 10:10:27.952875  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.952907  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:27.952949  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:27.952978  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:27.982369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:27.982444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:28.039695  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:28.039739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:28.059367  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:28.059394  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:28.141898  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:28.141920  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:28.141931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:30.668303  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:30.681191  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:30.681264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:30.708785  285837 cri.go:89] found id: ""
	I1213 10:10:30.708809  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.708817  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:30.708823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:30.708887  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:30.733895  285837 cri.go:89] found id: ""
	I1213 10:10:30.733918  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.733926  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:30.733932  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:30.733991  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:30.762790  285837 cri.go:89] found id: ""
	I1213 10:10:30.762811  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.762820  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:30.762826  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:30.762891  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:30.786743  285837 cri.go:89] found id: ""
	I1213 10:10:30.786807  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.786829  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:30.786846  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:30.786925  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:30.813249  285837 cri.go:89] found id: ""
	I1213 10:10:30.813272  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.813281  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:30.813288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:30.813347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:30.837491  285837 cri.go:89] found id: ""
	I1213 10:10:30.837520  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.837529  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:30.837536  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:30.837596  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:30.862539  285837 cri.go:89] found id: ""
	I1213 10:10:30.862599  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.862622  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:30.862640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:30.862714  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:30.887350  285837 cri.go:89] found id: ""
	I1213 10:10:30.887371  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.887379  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:30.887388  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:30.887399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:30.943669  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:30.943701  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:30.957123  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:30.957172  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:31.036468  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:31.036496  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:31.036509  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:31.065951  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:31.065987  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.600787  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:33.611280  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:33.611352  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:33.640061  285837 cri.go:89] found id: ""
	I1213 10:10:33.640084  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.640093  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:33.640099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:33.640159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:33.664736  285837 cri.go:89] found id: ""
	I1213 10:10:33.664763  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.664772  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:33.664780  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:33.664839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:33.688858  285837 cri.go:89] found id: ""
	I1213 10:10:33.688882  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.688892  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:33.688898  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:33.688955  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:33.719915  285837 cri.go:89] found id: ""
	I1213 10:10:33.719944  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.719953  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:33.719960  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:33.720015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:33.744897  285837 cri.go:89] found id: ""
	I1213 10:10:33.744927  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.744937  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:33.744943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:33.745037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:33.773037  285837 cri.go:89] found id: ""
	I1213 10:10:33.773059  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.773067  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:33.773073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:33.773134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:33.797407  285837 cri.go:89] found id: ""
	I1213 10:10:33.797433  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.797443  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:33.797449  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:33.797510  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:33.825833  285837 cri.go:89] found id: ""
	I1213 10:10:33.825859  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.825868  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:33.825877  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:33.825889  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:33.851755  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:33.851788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.884360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:33.884385  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:33.940045  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:33.940080  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:33.954004  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:33.954039  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:34.035282  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
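
Taken together, the timestamps show the whole sequence repeating roughly every 2.5-3 seconds: pgrep, per-component crictl probes, log gathering, a failed `describe nodes`, then another attempt. The overall shape is a wait-for-apiserver loop; a schematic version follows (hypothetical: the `apiserverUp` check and the deadline are illustrative, not minikube's real implementation):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp is a stand-in for the real health check: it only tests
// whether anything accepts TCP connections on the apiserver port.
func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false // e.g. connect: connection refused, as in the log
	}
	conn.Close()
	return true
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(2500 * time.Millisecond) // cycles in the log land ~2.5-3s apart
	}
	fmt.Println("timed out waiting for apiserver")
}
```
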
	I1213 10:10:36.535645  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:36.547382  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:36.547469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:36.579677  285837 cri.go:89] found id: ""
	I1213 10:10:36.579701  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.579711  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:36.579725  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:36.579802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:36.606029  285837 cri.go:89] found id: ""
	I1213 10:10:36.606058  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.606067  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:36.606073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:36.606134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:36.631618  285837 cri.go:89] found id: ""
	I1213 10:10:36.631640  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.631649  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:36.631655  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:36.631712  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:36.656376  285837 cri.go:89] found id: ""
	I1213 10:10:36.656399  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.656407  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:36.656413  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:36.656469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:36.684348  285837 cri.go:89] found id: ""
	I1213 10:10:36.684369  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.684377  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:36.684383  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:36.684443  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:36.708549  285837 cri.go:89] found id: ""
	I1213 10:10:36.708578  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.708587  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:36.708594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:36.708653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:36.732630  285837 cri.go:89] found id: ""
	I1213 10:10:36.732659  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.732669  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:36.732677  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:36.732738  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:36.761465  285837 cri.go:89] found id: ""
	I1213 10:10:36.761493  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.761503  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:36.761513  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:36.761524  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:36.774752  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:36.774787  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:36.837540  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:36.829412    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.830021    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.831594    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.832091    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.833636    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:36.837603  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:36.837625  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:36.862806  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:36.862844  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:36.893277  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:36.893302  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
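	Every retry cycle in this stretch has the same shape: pgrep for a kube-apiserver process, one crictl query per control-plane component, and, when all of them come back empty, a sweep of the kubelet, dmesg, describe-nodes, containerd, and container-status logs. The per-component check reduces to a small loop; this is a hand-run equivalent of what the harness executes (command and component names are verbatim from the Run: lines above):

	    # Re-run the harness's control-plane container check by hand.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done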
	I1213 10:10:39.453851  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:39.464513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:39.464595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:39.488288  285837 cri.go:89] found id: ""
	I1213 10:10:39.488310  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.488319  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:39.488329  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:39.488386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:39.513054  285837 cri.go:89] found id: ""
	I1213 10:10:39.513077  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.513085  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:39.513091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:39.513156  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:39.542442  285837 cri.go:89] found id: ""
	I1213 10:10:39.542465  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.542474  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:39.542480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:39.542535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:39.575244  285837 cri.go:89] found id: ""
	I1213 10:10:39.575271  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.575280  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:39.575286  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:39.575341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:39.605371  285837 cri.go:89] found id: ""
	I1213 10:10:39.605402  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.605411  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:39.605417  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:39.605475  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:39.629581  285837 cri.go:89] found id: ""
	I1213 10:10:39.629608  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.629617  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:39.629624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:39.629680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:39.657061  285837 cri.go:89] found id: ""
	I1213 10:10:39.657089  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.657098  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:39.657104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:39.657162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:39.680815  285837 cri.go:89] found id: ""
	I1213 10:10:39.680880  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.680894  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:39.680904  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:39.680915  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.738790  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:39.738822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:39.751947  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:39.751976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:39.816341  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:39.816364  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:39.816376  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:39.841100  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:39.841132  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:42.369166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:42.380009  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:42.380075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:42.411353  285837 cri.go:89] found id: ""
	I1213 10:10:42.411380  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.411390  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:42.411397  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:42.411455  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:42.436688  285837 cri.go:89] found id: ""
	I1213 10:10:42.436718  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.436728  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:42.436734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:42.436816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:42.462185  285837 cri.go:89] found id: ""
	I1213 10:10:42.462211  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.462220  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:42.462226  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:42.462285  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:42.487623  285837 cri.go:89] found id: ""
	I1213 10:10:42.487647  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.487657  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:42.487663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:42.487722  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:42.513508  285837 cri.go:89] found id: ""
	I1213 10:10:42.513534  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.513543  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:42.513549  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:42.513610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:42.544400  285837 cri.go:89] found id: ""
	I1213 10:10:42.544424  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.544432  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:42.544439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:42.544498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:42.571251  285837 cri.go:89] found id: ""
	I1213 10:10:42.571281  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.571290  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:42.571297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:42.571353  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:42.608069  285837 cri.go:89] found id: ""
	I1213 10:10:42.608094  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.608103  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:42.608113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:42.608124  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:42.663779  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:42.663815  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:42.677800  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:42.677839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:42.742889  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:42.742913  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:42.742927  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:42.769648  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:42.769682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
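	The "container status" step uses a two-level shell fallback: which crictl || echo crictl substitutes the bare name when crictl is not on PATH (so the invocation still fails with a clear error rather than an empty command), and || sudo docker ps -a covers nodes where the runtime is plain Docker instead of a CRI socket. Written out on its own, verbatim from the Run: lines:

	    # Prefer crictl; if it is missing or its listing fails, fall back to docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The backticks are kept exactly as the harness emits them; $(which crictl || echo crictl) would be the equivalent modern spelling.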
	I1213 10:10:45.299918  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:45.313054  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:45.313153  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:45.339870  285837 cri.go:89] found id: ""
	I1213 10:10:45.339904  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.339914  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:45.339935  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:45.340013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:45.364702  285837 cri.go:89] found id: ""
	I1213 10:10:45.364736  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.364746  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:45.364752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:45.364815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:45.389159  285837 cri.go:89] found id: ""
	I1213 10:10:45.389189  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.389200  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:45.389206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:45.389286  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:45.413889  285837 cri.go:89] found id: ""
	I1213 10:10:45.413918  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.413927  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:45.413933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:45.414000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:45.438849  285837 cri.go:89] found id: ""
	I1213 10:10:45.438885  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.438895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:45.438901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:45.438962  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:45.469093  285837 cri.go:89] found id: ""
	I1213 10:10:45.469116  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.469124  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:45.469130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:45.469233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:45.493365  285837 cri.go:89] found id: ""
	I1213 10:10:45.493391  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.493401  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:45.493408  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:45.493465  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:45.517810  285837 cri.go:89] found id: ""
	I1213 10:10:45.517839  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.517848  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:45.517858  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:45.517870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:45.532750  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:45.532781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:45.610253  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:45.601970    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.602367    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604011    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604678    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.606346    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:45.610276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:45.610289  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:45.635170  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:45.635201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.662649  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:45.662727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.218853  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:48.230454  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:48.230539  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:48.256210  285837 cri.go:89] found id: ""
	I1213 10:10:48.256235  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.256244  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:48.256250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:48.256311  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:48.288857  285837 cri.go:89] found id: ""
	I1213 10:10:48.288882  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.288891  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:48.288897  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:48.288952  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:48.317960  285837 cri.go:89] found id: ""
	I1213 10:10:48.317994  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.318020  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:48.318034  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:48.318108  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:48.347646  285837 cri.go:89] found id: ""
	I1213 10:10:48.347724  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.347738  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:48.347746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:48.347815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:48.372818  285837 cri.go:89] found id: ""
	I1213 10:10:48.372840  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.372849  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:48.372855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:48.372915  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:48.400208  285837 cri.go:89] found id: ""
	I1213 10:10:48.400281  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.400296  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:48.400304  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:48.400373  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:48.424245  285837 cri.go:89] found id: ""
	I1213 10:10:48.424272  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.424282  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:48.424287  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:48.424345  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:48.450041  285837 cri.go:89] found id: ""
	I1213 10:10:48.450074  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.450083  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:48.450092  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:48.450103  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:48.516704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:48.507097    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.507702    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509433    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509994    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.511703    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:48.516726  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:48.516739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:48.544227  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:48.544262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:48.581036  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:48.581067  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.643405  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:48.643440  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
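	The log sweep itself is three bounded reads, each capped at 400 lines so a wedged node cannot flood the report. Running them manually inside the node yields the same material the harness collects (commands verbatim from the Run: lines above):

	    # Collect the same logs the harness gathers on each failed cycle.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400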
	I1213 10:10:51.157408  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:51.168232  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:51.168298  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:51.194497  285837 cri.go:89] found id: ""
	I1213 10:10:51.194533  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.194545  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:51.194552  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:51.194619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:51.219079  285837 cri.go:89] found id: ""
	I1213 10:10:51.219099  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.219107  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:51.219112  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:51.219167  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:51.244709  285837 cri.go:89] found id: ""
	I1213 10:10:51.244732  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.244740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:51.244747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:51.244806  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:51.284617  285837 cri.go:89] found id: ""
	I1213 10:10:51.284643  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.284651  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:51.284657  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:51.284713  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:51.314124  285837 cri.go:89] found id: ""
	I1213 10:10:51.314152  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.314162  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:51.314170  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:51.314228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:51.346119  285837 cri.go:89] found id: ""
	I1213 10:10:51.346144  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.346153  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:51.346160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:51.346218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:51.371813  285837 cri.go:89] found id: ""
	I1213 10:10:51.371841  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.371850  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:51.371861  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:51.371918  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:51.397126  285837 cri.go:89] found id: ""
	I1213 10:10:51.397150  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.397159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:51.397174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:51.397216  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:51.426866  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:51.426894  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:51.483164  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:51.483196  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.497003  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:51.497028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:51.582114  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:51.573287    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.574073    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.575716    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.576298    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.577879    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:51.582138  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:51.582151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.110647  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:54.121581  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:54.121653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:54.145568  285837 cri.go:89] found id: ""
	I1213 10:10:54.145591  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.145600  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:54.145606  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:54.145667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:54.171162  285837 cri.go:89] found id: ""
	I1213 10:10:54.171186  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.171195  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:54.171202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:54.171258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:54.196117  285837 cri.go:89] found id: ""
	I1213 10:10:54.196140  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.196148  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:54.196154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:54.196211  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:54.221183  285837 cri.go:89] found id: ""
	I1213 10:10:54.221226  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.221236  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:54.221243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:54.221300  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:54.246527  285837 cri.go:89] found id: ""
	I1213 10:10:54.246569  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.246578  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:54.246585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:54.246648  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:54.273839  285837 cri.go:89] found id: ""
	I1213 10:10:54.273866  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.273875  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:54.273881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:54.273936  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:54.305443  285837 cri.go:89] found id: ""
	I1213 10:10:54.305468  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.305477  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:54.305483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:54.305566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:54.337568  285837 cri.go:89] found id: ""
	I1213 10:10:54.337634  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.337649  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:54.337659  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:54.337671  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:54.394420  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:54.394456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:54.408137  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:54.408167  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:54.476257  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:54.467629    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.468321    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470154    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470801    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.472389    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:54.476279  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:54.476294  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.501779  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:54.501818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.039708  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:57.051575  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:57.051656  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:57.077147  285837 cri.go:89] found id: ""
	I1213 10:10:57.077171  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.077180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:57.077186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:57.077249  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:57.100638  285837 cri.go:89] found id: ""
	I1213 10:10:57.100662  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.100672  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:57.100679  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:57.100736  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:57.124849  285837 cri.go:89] found id: ""
	I1213 10:10:57.124872  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.124880  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:57.124886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:57.124942  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:57.149947  285837 cri.go:89] found id: ""
	I1213 10:10:57.149970  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.149979  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:57.149985  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:57.150041  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:57.177921  285837 cri.go:89] found id: ""
	I1213 10:10:57.177944  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.177952  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:57.177958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:57.178015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:57.202761  285837 cri.go:89] found id: ""
	I1213 10:10:57.202785  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.202793  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:57.202799  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:57.202861  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:57.232853  285837 cri.go:89] found id: ""
	I1213 10:10:57.232880  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.232890  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:57.232896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:57.232958  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:57.257698  285837 cri.go:89] found id: ""
	I1213 10:10:57.257725  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.257734  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:57.257744  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:57.257754  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:57.284012  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:57.284084  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.318707  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:57.318744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:57.380534  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:57.380571  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:57.394671  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:57.394704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:57.463198  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:59.963429  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:59.974005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:59.974074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:00.002819  285837 cri.go:89] found id: ""
	I1213 10:11:00.002842  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.002853  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:00.002860  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:00.002927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:00.094025  285837 cri.go:89] found id: ""
	I1213 10:11:00.094053  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.094064  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:00.094071  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:00.094142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:00.174313  285837 cri.go:89] found id: ""
	I1213 10:11:00.174336  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.174345  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:00.174352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:00.174417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:00.249900  285837 cri.go:89] found id: ""
	I1213 10:11:00.249939  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.249949  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:00.249968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:00.250053  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:00.326093  285837 cri.go:89] found id: ""
	I1213 10:11:00.326121  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.326130  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:00.326138  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:00.326207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:00.398659  285837 cri.go:89] found id: ""
	I1213 10:11:00.398685  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.398695  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:00.398702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:00.398771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:00.438080  285837 cri.go:89] found id: ""
	I1213 10:11:00.438106  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.438116  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:00.438123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:00.438200  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:00.466610  285837 cri.go:89] found id: ""
	I1213 10:11:00.466635  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.466644  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:00.466655  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:00.466668  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:00.524796  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:00.524832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:00.541430  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:00.541461  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:00.620210  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:00.611464    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.612064    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.613626    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.614181    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.615780    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:00.611464    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.612064    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.613626    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.614181    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.615780    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:00.620234  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:00.620248  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:00.646443  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:00.646481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.175597  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:03.187100  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:03.187169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:03.213075  285837 cri.go:89] found id: ""
	I1213 10:11:03.213099  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.213108  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:03.213114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:03.213173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:03.238387  285837 cri.go:89] found id: ""
	I1213 10:11:03.238413  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.238422  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:03.238428  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:03.238485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:03.263021  285837 cri.go:89] found id: ""
	I1213 10:11:03.263047  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.263057  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:03.263064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:03.263120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:03.287967  285837 cri.go:89] found id: ""
	I1213 10:11:03.287990  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.287999  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:03.288005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:03.288070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:03.313649  285837 cri.go:89] found id: ""
	I1213 10:11:03.313676  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.313685  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:03.313691  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:03.313782  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:03.341329  285837 cri.go:89] found id: ""
	I1213 10:11:03.341395  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.341410  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:03.341418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:03.341480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:03.367350  285837 cri.go:89] found id: ""
	I1213 10:11:03.367376  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.367386  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:03.367392  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:03.367450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:03.394523  285837 cri.go:89] found id: ""
	I1213 10:11:03.394548  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.394556  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:03.394566  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:03.394579  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:03.408418  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:03.408444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:03.481932  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:03.473186    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.474065    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.475730    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.476279    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.477971    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:03.473186    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.474065    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.475730    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.476279    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.477971    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:03.481953  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:03.481965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:03.508165  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:03.508197  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.564104  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:03.564135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.137748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:06.148529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:06.148601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:06.173118  285837 cri.go:89] found id: ""
	I1213 10:11:06.173142  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.173151  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:06.173164  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:06.173225  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:06.198710  285837 cri.go:89] found id: ""
	I1213 10:11:06.198732  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.198741  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:06.198747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:06.198802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:06.224139  285837 cri.go:89] found id: ""
	I1213 10:11:06.224163  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.224171  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:06.224183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:06.224246  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:06.249528  285837 cri.go:89] found id: ""
	I1213 10:11:06.249553  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.249568  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:06.249577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:06.249636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:06.283856  285837 cri.go:89] found id: ""
	I1213 10:11:06.283886  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.283894  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:06.283901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:06.283964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:06.307922  285837 cri.go:89] found id: ""
	I1213 10:11:06.307947  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.307956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:06.307963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:06.308020  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:06.332705  285837 cri.go:89] found id: ""
	I1213 10:11:06.332731  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.332739  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:06.332746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:06.332805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:06.358646  285837 cri.go:89] found id: ""
	I1213 10:11:06.358672  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.358681  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:06.358691  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:06.358702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.414726  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:06.414763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:06.428830  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:06.428866  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:06.495345  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:06.495373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:06.495386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:06.523314  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:06.523359  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:09.076696  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:09.087477  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:09.087569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:09.111658  285837 cri.go:89] found id: ""
	I1213 10:11:09.111681  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.111690  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:09.111696  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:09.111759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:09.135775  285837 cri.go:89] found id: ""
	I1213 10:11:09.135801  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.135809  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:09.135816  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:09.135872  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:09.165477  285837 cri.go:89] found id: ""
	I1213 10:11:09.165500  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.165514  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:09.165520  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:09.165576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:09.194399  285837 cri.go:89] found id: ""
	I1213 10:11:09.194421  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.194437  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:09.194446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:09.194503  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:09.223486  285837 cri.go:89] found id: ""
	I1213 10:11:09.223508  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.223537  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:09.223544  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:09.223603  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:09.252819  285837 cri.go:89] found id: ""
	I1213 10:11:09.252842  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.252851  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:09.252857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:09.252916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:09.277570  285837 cri.go:89] found id: ""
	I1213 10:11:09.277641  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.277656  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:09.277666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:09.277729  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:09.302629  285837 cri.go:89] found id: ""
	I1213 10:11:09.302652  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.302661  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:09.302671  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:09.302682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:09.358773  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:09.358811  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:09.372815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:09.372842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:09.441717  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:09.441793  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:09.441822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:09.466485  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:09.466517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:11.993817  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:12.018615  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:12.018690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:12.044911  285837 cri.go:89] found id: ""
	I1213 10:11:12.044934  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.044943  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:12.044949  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:12.045013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:12.069918  285837 cri.go:89] found id: ""
	I1213 10:11:12.069940  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.069949  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:12.069955  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:12.070018  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:12.094440  285837 cri.go:89] found id: ""
	I1213 10:11:12.094461  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.094470  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:12.094476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:12.094530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:12.118079  285837 cri.go:89] found id: ""
	I1213 10:11:12.118099  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.118108  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:12.118114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:12.118169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:12.145090  285837 cri.go:89] found id: ""
	I1213 10:11:12.145115  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.145125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:12.145131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:12.145186  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:12.168654  285837 cri.go:89] found id: ""
	I1213 10:11:12.168725  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.168749  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:12.168762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:12.168820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:12.192603  285837 cri.go:89] found id: ""
	I1213 10:11:12.192677  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.192704  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:12.192726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:12.192802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:12.216389  285837 cri.go:89] found id: ""
	I1213 10:11:12.216454  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.216478  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:12.216501  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:12.216517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:12.273281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:12.273315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:12.286866  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:12.286903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:12.353852  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:12.353884  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:12.353914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:12.379896  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:12.379931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:14.910354  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:14.920854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:14.920922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:14.946408  285837 cri.go:89] found id: ""
	I1213 10:11:14.946430  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.946439  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:14.946446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:14.946501  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:14.977293  285837 cri.go:89] found id: ""
	I1213 10:11:14.977322  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.977337  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:14.977343  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:14.977414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:15.010967  285837 cri.go:89] found id: ""
	I1213 10:11:15.011055  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.011079  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:15.011098  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:15.011201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:15.050270  285837 cri.go:89] found id: ""
	I1213 10:11:15.050294  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.050314  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:15.050321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:15.050387  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:15.076902  285837 cri.go:89] found id: ""
	I1213 10:11:15.076927  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.076936  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:15.076943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:15.077003  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:15.106349  285837 cri.go:89] found id: ""
	I1213 10:11:15.106379  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.106389  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:15.106395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:15.106458  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:15.134472  285837 cri.go:89] found id: ""
	I1213 10:11:15.134497  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.134506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:15.134512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:15.134569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:15.161713  285837 cri.go:89] found id: ""
	I1213 10:11:15.161740  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.161750  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:15.161759  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:15.161773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:15.217480  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:15.217512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:15.231189  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:15.231217  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:15.304481  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:15.304502  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:15.304515  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:15.329819  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:15.329853  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:17.857044  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:17.868755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:17.868830  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:17.892866  285837 cri.go:89] found id: ""
	I1213 10:11:17.892890  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.892900  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:17.892906  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:17.892969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:17.918428  285837 cri.go:89] found id: ""
	I1213 10:11:17.918450  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.918459  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:17.918467  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:17.918520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:17.941924  285837 cri.go:89] found id: ""
	I1213 10:11:17.941945  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.941953  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:17.941959  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:17.942015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:17.966130  285837 cri.go:89] found id: ""
	I1213 10:11:17.966153  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.966162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:17.966168  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:17.966266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:17.994412  285837 cri.go:89] found id: ""
	I1213 10:11:17.994437  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.994446  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:17.994452  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:17.994509  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:18.020369  285837 cri.go:89] found id: ""
	I1213 10:11:18.020392  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.020401  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:18.020407  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:18.020485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:18.047590  285837 cri.go:89] found id: ""
	I1213 10:11:18.047614  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.047623  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:18.047629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:18.047689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:18.074433  285837 cri.go:89] found id: ""
	I1213 10:11:18.074456  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.074465  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:18.074475  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:18.074487  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:18.101094  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:18.101129  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:18.129666  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:18.129695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:18.185620  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:18.185652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:18.199477  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:18.199503  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:18.264408  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:20.765401  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:20.778692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:20.778759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:20.856776  285837 cri.go:89] found id: ""
	I1213 10:11:20.856798  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.856807  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:20.856813  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:20.856871  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:20.886867  285837 cri.go:89] found id: ""
	I1213 10:11:20.886896  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.886912  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:20.886918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:20.886992  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:20.915220  285837 cri.go:89] found id: ""
	I1213 10:11:20.915245  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.915254  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:20.915260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:20.915318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:20.939562  285837 cri.go:89] found id: ""
	I1213 10:11:20.939585  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.939594  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:20.939600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:20.939667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:20.964172  285837 cri.go:89] found id: ""
	I1213 10:11:20.964195  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.964204  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:20.964210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:20.964269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:20.989184  285837 cri.go:89] found id: ""
	I1213 10:11:20.989206  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.989215  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:20.989221  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:20.989287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:21.015584  285837 cri.go:89] found id: ""
	I1213 10:11:21.015608  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.015616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:21.015623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:21.015692  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:21.041789  285837 cri.go:89] found id: ""
	I1213 10:11:21.041812  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.041820  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
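	Each probe cycle enumerates the full set of expected control-plane and addon containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query returns an empty ID list. A compact way to repeat that enumeration by hand, reusing the same crictl flags as the Run: lines above (a sketch; run inside the node, assuming crictl is on PATH):

	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      # Count container IDs (running or exited) for each component.
	      printf '%s: %d\n' "$name" "$(sudo crictl ps -a --quiet --name="$name" | grep -c .)"
	    done

	A healthy node reports at least one ID per core component; here every count is zero.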
	I1213 10:11:21.041829  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:21.041842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:21.055424  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:21.055450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:21.119438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:21.119456  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:21.119469  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:21.144678  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:21.144713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:21.177284  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:21.177313  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
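	When no containers are found, minikube falls back to host-level diagnostics: the kubelet and containerd journals, the kernel ring buffer, a describe-nodes attempt through the bundled kubectl, and a raw container listing with a docker fallback if crictl is missing. The sweep can be reproduced inside the node with the same commands logged above:

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	Only the describe-nodes step fails, and it fails for the same reason as every other kubectl call in this log: port 8443 is not being served.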
	I1213 10:11:23.742410  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:23.752527  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:23.752601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:23.812957  285837 cri.go:89] found id: ""
	I1213 10:11:23.812979  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.812987  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:23.812994  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:23.813052  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:23.858208  285837 cri.go:89] found id: ""
	I1213 10:11:23.858236  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.858246  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:23.858253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:23.858315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:23.885293  285837 cri.go:89] found id: ""
	I1213 10:11:23.885318  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.885328  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:23.885334  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:23.885396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:23.911374  285837 cri.go:89] found id: ""
	I1213 10:11:23.911399  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.911409  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:23.911541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:23.911621  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:23.940586  285837 cri.go:89] found id: ""
	I1213 10:11:23.940611  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.940620  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:23.940625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:23.940683  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:23.965387  285837 cri.go:89] found id: ""
	I1213 10:11:23.965413  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.965423  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:23.965430  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:23.965491  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:23.989910  285837 cri.go:89] found id: ""
	I1213 10:11:23.989936  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.989945  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:23.989952  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:23.990009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:24.016511  285837 cri.go:89] found id: ""
	I1213 10:11:24.016539  285837 logs.go:282] 0 containers: []
	W1213 10:11:24.016548  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:24.016558  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:24.016569  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:24.076500  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:24.076542  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:24.090891  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:24.090920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:24.158444  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:24.158466  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:24.158478  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:24.184352  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:24.184389  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:26.715866  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:26.726291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:26.726358  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:26.749726  285837 cri.go:89] found id: ""
	I1213 10:11:26.749748  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.749757  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:26.749763  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:26.749820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:26.798311  285837 cri.go:89] found id: ""
	I1213 10:11:26.798333  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.798341  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:26.798347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:26.798403  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:26.855482  285837 cri.go:89] found id: ""
	I1213 10:11:26.855506  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.855541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:26.855548  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:26.855606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:26.887763  285837 cri.go:89] found id: ""
	I1213 10:11:26.887833  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.887857  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:26.887876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:26.887963  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:26.913160  285837 cri.go:89] found id: ""
	I1213 10:11:26.913183  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.913192  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:26.913199  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:26.913266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:26.940887  285837 cri.go:89] found id: ""
	I1213 10:11:26.940965  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.940996  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:26.941004  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:26.941070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:26.965212  285837 cri.go:89] found id: ""
	I1213 10:11:26.965233  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.965242  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:26.965248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:26.965313  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:26.989687  285837 cri.go:89] found id: ""
	I1213 10:11:26.989710  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.989718  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:26.989733  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:26.989744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:27.020130  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:27.020156  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:27.075963  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:27.076001  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:27.089421  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:27.089452  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:27.154208  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:27.154231  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:27.154243  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:29.679077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:29.689987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:29.690113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:29.719241  285837 cri.go:89] found id: ""
	I1213 10:11:29.719304  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.719318  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:29.719325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:29.719382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:29.745413  285837 cri.go:89] found id: ""
	I1213 10:11:29.745511  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.745533  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:29.745541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:29.745624  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:29.789119  285837 cri.go:89] found id: ""
	I1213 10:11:29.789193  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.789228  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:29.789251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:29.789362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:29.865327  285837 cri.go:89] found id: ""
	I1213 10:11:29.865413  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.865429  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:29.865437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:29.865495  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:29.890183  285837 cri.go:89] found id: ""
	I1213 10:11:29.890260  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.890283  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:29.890301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:29.890397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:29.919549  285837 cri.go:89] found id: ""
	I1213 10:11:29.919622  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.919646  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:29.919666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:29.919771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:29.945219  285837 cri.go:89] found id: ""
	I1213 10:11:29.945248  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.945257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:29.945264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:29.945364  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:29.973791  285837 cri.go:89] found id: ""
	I1213 10:11:29.973822  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.973832  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:29.973842  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:29.973870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:30.030470  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:30.030512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:30.047458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:30.047559  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:30.123116  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:30.123215  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:30.123250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:30.149652  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:30.149689  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:32.679599  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:32.690298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:32.690372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:32.713694  285837 cri.go:89] found id: ""
	I1213 10:11:32.713718  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.713726  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:32.713733  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:32.713790  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:32.738621  285837 cri.go:89] found id: ""
	I1213 10:11:32.738645  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.738654  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:32.738660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:32.738720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:32.762830  285837 cri.go:89] found id: ""
	I1213 10:11:32.762855  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.762865  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:32.762871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:32.762928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:32.799422  285837 cri.go:89] found id: ""
	I1213 10:11:32.799448  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.799464  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:32.799471  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:32.799543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:32.856726  285837 cri.go:89] found id: ""
	I1213 10:11:32.856759  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.856768  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:32.856775  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:32.856839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:32.883319  285837 cri.go:89] found id: ""
	I1213 10:11:32.883346  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.883356  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:32.883362  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:32.883422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:32.909028  285837 cri.go:89] found id: ""
	I1213 10:11:32.909054  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.909063  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:32.909070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:32.909127  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:32.938657  285837 cri.go:89] found id: ""
	I1213 10:11:32.938691  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.938701  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:32.938710  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:32.938721  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:32.994400  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:32.994434  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.008614  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:33.008653  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:33.076509  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:33.076539  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:33.076553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:33.101599  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:33.101631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:35.629072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:35.639660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:35.639731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:35.664032  285837 cri.go:89] found id: ""
	I1213 10:11:35.664060  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.664068  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:35.664076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:35.664130  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:35.692081  285837 cri.go:89] found id: ""
	I1213 10:11:35.692108  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.692118  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:35.692124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:35.692180  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:35.717152  285837 cri.go:89] found id: ""
	I1213 10:11:35.717177  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.717186  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:35.717192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:35.717251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:35.741898  285837 cri.go:89] found id: ""
	I1213 10:11:35.741931  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.741940  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:35.741946  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:35.742013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:35.766255  285837 cri.go:89] found id: ""
	I1213 10:11:35.766289  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.766298  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:35.766305  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:35.766370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:35.829052  285837 cri.go:89] found id: ""
	I1213 10:11:35.829093  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.829104  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:35.829111  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:35.829189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:35.872000  285837 cri.go:89] found id: ""
	I1213 10:11:35.872072  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.872085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:35.872092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:35.872162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:35.897842  285837 cri.go:89] found id: ""
	I1213 10:11:35.897874  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.897883  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:35.897893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:35.897911  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:35.955605  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:35.955640  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:35.969234  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:35.969262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:36.035000  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:36.035063  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:36.035083  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:36.061000  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:36.061037  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:38.589308  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:38.599753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:38.599818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:38.623379  285837 cri.go:89] found id: ""
	I1213 10:11:38.623400  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.623409  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:38.623418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:38.623476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:38.649806  285837 cri.go:89] found id: ""
	I1213 10:11:38.649830  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.649840  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:38.649847  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:38.649908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:38.674234  285837 cri.go:89] found id: ""
	I1213 10:11:38.674257  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.674266  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:38.674272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:38.674334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:38.698759  285837 cri.go:89] found id: ""
	I1213 10:11:38.698780  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.698789  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:38.698795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:38.698851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:38.725178  285837 cri.go:89] found id: ""
	I1213 10:11:38.725205  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.725215  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:38.725222  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:38.725281  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:38.766167  285837 cri.go:89] found id: ""
	I1213 10:11:38.766194  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.766204  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:38.766210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:38.766265  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:38.808982  285837 cri.go:89] found id: ""
	I1213 10:11:38.809009  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.809017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:38.809023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:38.809080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:38.870538  285837 cri.go:89] found id: ""
	I1213 10:11:38.870560  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.870568  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:38.870578  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:38.870589  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:38.928916  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:38.928958  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:38.943274  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:38.943304  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:39.011182  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:39.011208  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:39.011223  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:39.038343  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:39.038377  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
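	The timestamps show the whole probe-and-gather cycle repeating on roughly a three-second interval until the surrounding start timeout expires. As a rough stand-in for that wait (an assumption about the loop's intent, not minikube's actual code), it reduces to polling crictl until an apiserver container appears:

	    # Poll every 3s until a running kube-apiserver container shows up.
	    until [ -n "$(sudo crictl ps --quiet --name=kube-apiserver)" ]; do
	      sleep 3
	    done

	In this run the condition never becomes true, so the cycles below continue unchanged until the test gives up.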
	I1213 10:11:41.571555  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:41.582245  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:41.582319  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:41.609447  285837 cri.go:89] found id: ""
	I1213 10:11:41.609473  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.609483  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:41.609490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:41.609546  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:41.637801  285837 cri.go:89] found id: ""
	I1213 10:11:41.637823  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.637832  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:41.637838  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:41.637901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:41.661762  285837 cri.go:89] found id: ""
	I1213 10:11:41.661786  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.661795  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:41.661801  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:41.661865  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:41.685944  285837 cri.go:89] found id: ""
	I1213 10:11:41.685966  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.685981  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:41.685987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:41.686044  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:41.710847  285837 cri.go:89] found id: ""
	I1213 10:11:41.710874  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.710883  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:41.710889  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:41.710947  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:41.739921  285837 cri.go:89] found id: ""
	I1213 10:11:41.739947  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.739956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:41.739962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:41.740021  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:41.764216  285837 cri.go:89] found id: ""
	I1213 10:11:41.764245  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.764254  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:41.764260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:41.764318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:41.822929  285837 cri.go:89] found id: ""
	I1213 10:11:41.822960  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.822969  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:41.822995  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:41.823012  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.860056  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:41.860087  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:41.916192  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:41.916225  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:41.932977  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:41.933051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:41.996358  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:41.996420  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:41.996436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
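
The cycle above is minikube's apiserver wait loop: it looks for a running kube-apiserver process, enumerates the expected control-plane containers through crictl, and, when nothing is found, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The sketch below is a hypothetical reconstruction of the outer loop, not minikube's actual code; the 6-minute budget and the helper name are assumptions, while the pgrep pattern and the ~3-second cadence are taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the check in the log above,
    // "sudo pgrep -xnf kube-apiserver.*minikube.*";
    // pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed budget, for illustration
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3 s gap between cycles above
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
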
	I1213 10:11:44.525380  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:44.536068  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:44.536183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:44.561444  285837 cri.go:89] found id: ""
	I1213 10:11:44.561476  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.561485  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:44.561491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:44.561552  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:44.586945  285837 cri.go:89] found id: ""
	I1213 10:11:44.586975  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.586985  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:44.586991  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:44.587057  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:44.612842  285837 cri.go:89] found id: ""
	I1213 10:11:44.612874  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.612885  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:44.612891  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:44.612949  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:44.638444  285837 cri.go:89] found id: ""
	I1213 10:11:44.638472  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.638482  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:44.638489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:44.638547  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:44.664168  285837 cri.go:89] found id: ""
	I1213 10:11:44.664191  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.664200  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:44.664206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:44.664264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:44.693563  285837 cri.go:89] found id: ""
	I1213 10:11:44.693634  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.693659  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:44.693675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:44.693748  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:44.719349  285837 cri.go:89] found id: ""
	I1213 10:11:44.719376  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.719385  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:44.719391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:44.719456  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:44.744438  285837 cri.go:89] found id: ""
	I1213 10:11:44.744467  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.744476  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:44.744485  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:44.744498  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:44.815232  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:44.815321  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:44.836304  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:44.836331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:44.928422  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:44.928443  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:44.928456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:44.954308  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:44.954348  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
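
Each cycle probes for every expected control-plane container by name. A minimal standalone sketch of that enumeration, assuming only that crictl is installed and that an empty --quiet listing means "no container was found matching" the name (the component list is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs, one per line; -a includes exited ones.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            if ids := strings.Fields(string(out)); len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
            } else {
                fmt.Printf("%s: %d container(s)\n", name, len(ids))
            }
        }
    }
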
	I1213 10:11:47.482268  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:47.492724  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:47.492804  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:47.517619  285837 cri.go:89] found id: ""
	I1213 10:11:47.517646  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.517655  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:47.517661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:47.517731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:47.543100  285837 cri.go:89] found id: ""
	I1213 10:11:47.543137  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.543150  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:47.543160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:47.543223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:47.573882  285837 cri.go:89] found id: ""
	I1213 10:11:47.573906  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.573915  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:47.573922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:47.573979  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:47.598649  285837 cri.go:89] found id: ""
	I1213 10:11:47.598676  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.598685  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:47.598692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:47.598753  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:47.629998  285837 cri.go:89] found id: ""
	I1213 10:11:47.630034  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.630048  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:47.630056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:47.630135  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:47.658608  285837 cri.go:89] found id: ""
	I1213 10:11:47.658652  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.658662  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:47.658669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:47.658739  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:47.685293  285837 cri.go:89] found id: ""
	I1213 10:11:47.685337  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.685346  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:47.685352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:47.685419  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:47.711048  285837 cri.go:89] found id: ""
	I1213 10:11:47.711072  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.711081  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:47.711091  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:47.711102  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:47.774561  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:47.774611  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:47.814155  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:47.814228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:47.909982  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:47.910015  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:47.910028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:47.938465  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:47.938502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
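
Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver on localhost:8443 and gets connection refused, which is consistent with the empty kube-apiserver listing earlier in each cycle. The same triage can be done without kubectl by dialing the port directly; this is an illustrative probe, with the host and port taken from the errors above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no kube-apiserver container running, this prints
            // "connection refused", matching the kubectl errors in the log.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
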
	I1213 10:11:50.475972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:50.488352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:50.488421  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:50.513516  285837 cri.go:89] found id: ""
	I1213 10:11:50.513548  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.513558  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:50.513565  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:50.513619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:50.538473  285837 cri.go:89] found id: ""
	I1213 10:11:50.538498  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.538507  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:50.538513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:50.538569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:50.562753  285837 cri.go:89] found id: ""
	I1213 10:11:50.562775  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.562784  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:50.562790  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:50.562844  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:50.587561  285837 cri.go:89] found id: ""
	I1213 10:11:50.587587  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.587597  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:50.587603  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:50.587658  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:50.612019  285837 cri.go:89] found id: ""
	I1213 10:11:50.612048  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.612058  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:50.612064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:50.612123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:50.636935  285837 cri.go:89] found id: ""
	I1213 10:11:50.636959  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.636967  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:50.636973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:50.637034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:50.661053  285837 cri.go:89] found id: ""
	I1213 10:11:50.661076  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.661085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:50.661091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:50.661148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:50.690108  285837 cri.go:89] found id: ""
	I1213 10:11:50.690178  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.690201  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:50.690223  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:50.690262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:50.748741  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:50.748775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:50.762458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:50.762490  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:50.892763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:50.892783  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:50.892796  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:50.918206  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:50.918240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
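
The "Gathering logs for ..." steps are plain shell one-liners run over SSH: journalctl capped at the last 400 lines per unit, and dmesg filtered to warn level and above. A self-contained sketch that replays those exact commands locally (the gather helper is hypothetical; the command strings are copied verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the shell one-liners from the log and returns its output.
    func gather(cmd string) ([]byte, error) {
        return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    }

    func main() {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u containerd -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for _, c := range cmds {
            out, err := gather(c)
            fmt.Printf("%s -> %d bytes (err=%v)\n", c, len(out), err)
        }
    }
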
	I1213 10:11:53.447378  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:53.457486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:53.457551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:53.482258  285837 cri.go:89] found id: ""
	I1213 10:11:53.482283  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.482292  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:53.482299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:53.482357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:53.511304  285837 cri.go:89] found id: ""
	I1213 10:11:53.511330  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.511339  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:53.511345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:53.511405  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:53.540251  285837 cri.go:89] found id: ""
	I1213 10:11:53.540277  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.540286  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:53.540291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:53.540349  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:53.565753  285837 cri.go:89] found id: ""
	I1213 10:11:53.565781  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.565791  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:53.565797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:53.565855  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:53.595124  285837 cri.go:89] found id: ""
	I1213 10:11:53.595151  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.595160  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:53.595166  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:53.595224  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:53.620269  285837 cri.go:89] found id: ""
	I1213 10:11:53.620293  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.620302  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:53.620311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:53.620369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:53.645281  285837 cri.go:89] found id: ""
	I1213 10:11:53.645309  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.645318  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:53.645325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:53.645388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:53.670326  285837 cri.go:89] found id: ""
	I1213 10:11:53.670351  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.670360  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:53.670369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:53.670386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:53.726845  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:53.726879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:53.740167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:53.740194  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:53.843634  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:53.843657  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:53.843669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:53.870910  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:53.870995  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
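
The "container status" gatherer is a two-stage fallback: it resolves crictl via which (falling back to the bare command name if which finds nothing), and if that whole pipeline fails it falls back to sudo docker ps -a. A sketch of the same fallback chain, assuming a bash shell is available:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) ([]byte, error) {
        return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    }

    func main() {
        // Backticks here are bash command substitution, as in the log line.
        out, err := run("sudo `which crictl || echo crictl` ps -a")
        if err != nil {
            // crictl missing or failing: fall back to the Docker CLI.
            out, err = run("sudo docker ps -a")
        }
        fmt.Printf("err=%v\n%s", err, out)
    }
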
	I1213 10:11:56.405428  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:56.415940  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:56.416016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:56.449974  285837 cri.go:89] found id: ""
	I1213 10:11:56.449996  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.450004  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:56.450010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:56.450069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:56.474847  285837 cri.go:89] found id: ""
	I1213 10:11:56.474873  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.474882  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:56.474888  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:56.474946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:56.504742  285837 cri.go:89] found id: ""
	I1213 10:11:56.504768  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.504777  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:56.504783  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:56.504841  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:56.529471  285837 cri.go:89] found id: ""
	I1213 10:11:56.529493  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.529502  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:56.529509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:56.529569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:56.553719  285837 cri.go:89] found id: ""
	I1213 10:11:56.553740  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.553749  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:56.553755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:56.553812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:56.579917  285837 cri.go:89] found id: ""
	I1213 10:11:56.579942  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.579950  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:56.579957  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:56.580015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:56.603606  285837 cri.go:89] found id: ""
	I1213 10:11:56.603629  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.603638  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:56.603644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:56.603702  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:56.628438  285837 cri.go:89] found id: ""
	I1213 10:11:56.628460  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.628469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:56.628479  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:56.628491  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:56.655218  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:56.655245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:56.711105  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:56.711138  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:56.724564  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:56.724597  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:56.800105  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:56.800126  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:56.800141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
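
Note that the gather order shuffles between cycles: container status ran first in the 10:11:41 and 10:11:56 cycles but last in the 10:11:44 and 10:11:47 ones. That instability is consistent with ranging over a Go map, whose iteration order is deliberately randomized; a tiny demonstration, with the source names taken from the log (this is an inference about the shape of the code, not a confirmed detail):

    package main

    import "fmt"

    func main() {
        logSources := map[string]bool{
            "kubelet": true, "dmesg": true, "describe nodes": true,
            "containerd": true, "container status": true,
        }
        for name := range logSources { // order varies run to run, by design
            fmt.Println("Gathering logs for", name, "...")
        }
    }
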
	I1213 10:11:59.341824  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:59.351965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:59.352032  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:59.376522  285837 cri.go:89] found id: ""
	I1213 10:11:59.376544  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.376553  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:59.376559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:59.376623  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:59.405422  285837 cri.go:89] found id: ""
	I1213 10:11:59.405497  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.405522  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:59.405537  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:59.405608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:59.430317  285837 cri.go:89] found id: ""
	I1213 10:11:59.430344  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.430353  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:59.430359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:59.430417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:59.457827  285837 cri.go:89] found id: ""
	I1213 10:11:59.457854  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.457862  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:59.457868  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:59.457924  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:59.483234  285837 cri.go:89] found id: ""
	I1213 10:11:59.483261  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.483270  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:59.483277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:59.483337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:59.508270  285837 cri.go:89] found id: ""
	I1213 10:11:59.508296  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.508314  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:59.508322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:59.508379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:59.532819  285837 cri.go:89] found id: ""
	I1213 10:11:59.532842  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.532851  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:59.532857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:59.532913  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:59.556482  285837 cri.go:89] found id: ""
	I1213 10:11:59.556508  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.556517  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:59.556527  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:59.556540  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:59.611281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:59.611315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:59.624666  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:59.624694  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:59.690085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:59.690108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:59.690122  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.715666  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:59.715703  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
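
The repeated memcache.go:265 errors come from kubectl's discovery layer (client-go) failing to fetch the server's API group list; the five near-identical lines per attempt are its internal retries against a port nobody is listening on. The same request can be issued directly with client-go; this sketch assumes k8s.io/client-go is on the module path and skips TLS verification purely for illustration:

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host:            "https://localhost:8443",
            TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // illustration only
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // This is the request behind "couldn't get current server API group list".
        if _, err := dc.ServerGroups(); err != nil {
            fmt.Println("discovery failed:", err) // connection refused, as in the log
        }
    }
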
	I1213 10:12:02.245206  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:02.256067  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:02.256147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:02.280777  285837 cri.go:89] found id: ""
	I1213 10:12:02.280801  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.280809  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:02.280821  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:02.280885  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:02.305877  285837 cri.go:89] found id: ""
	I1213 10:12:02.305905  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.305914  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:02.305920  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:02.305988  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:02.330860  285837 cri.go:89] found id: ""
	I1213 10:12:02.330886  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.330894  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:02.330900  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:02.330965  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:02.356613  285837 cri.go:89] found id: ""
	I1213 10:12:02.356649  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.356659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:02.356665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:02.356746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:02.388158  285837 cri.go:89] found id: ""
	I1213 10:12:02.388181  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.388190  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:02.388196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:02.388256  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:02.415431  285837 cri.go:89] found id: ""
	I1213 10:12:02.415454  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.415462  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:02.415468  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:02.415538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:02.442554  285837 cri.go:89] found id: ""
	I1213 10:12:02.442580  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.442589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:02.442595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:02.442654  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:02.468134  285837 cri.go:89] found id: ""
	I1213 10:12:02.468159  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.468167  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:02.468177  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:02.468188  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:02.526799  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:02.526832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:02.542508  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:02.542533  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:02.616614  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:02.616637  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:02.616650  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:02.641382  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:02.641415  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
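
Across this excerpt the loop makes no progress: every cycle from 10:11:41 onward finds zero containers and ends in the same connection-refused describe-nodes failure. The elapsed retry time can be read straight off the klog prefixes; a small sketch that parses two of them (it assumes each line carries the standard "Lmmdd hh:mm:ss.micros" prefix, i.e. at least 21 characters):

    package main

    import (
        "fmt"
        "time"
    )

    // parseKlog extracts the timestamp from a klog prefix such as
    // "I1213 10:12:05.169197"; the year is absent, so results only
    // compare meaningfully within a single run.
    func parseKlog(line string) (time.Time, error) {
        return time.Parse("0102 15:04:05.000000", line[1:21])
    }

    func main() {
        first, _ := parseKlog("I1213 10:11:41.739962") // first cycle shown above
        last, _ := parseKlog("I1213 10:12:05.169197")  // latest cycle shown above
        fmt.Println("retrying for:", last.Sub(first)) // ~23s of identical cycles
    }
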
	I1213 10:12:05.169197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:05.179948  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:05.180017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:05.205082  285837 cri.go:89] found id: ""
	I1213 10:12:05.205105  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.205113  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:05.205119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:05.205176  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:05.234272  285837 cri.go:89] found id: ""
	I1213 10:12:05.234295  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.234305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:05.234311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:05.234369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:05.259024  285837 cri.go:89] found id: ""
	I1213 10:12:05.259047  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.259055  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:05.259062  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:05.259120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:05.287223  285837 cri.go:89] found id: ""
	I1213 10:12:05.287249  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.287257  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:05.287264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:05.287323  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:05.311741  285837 cri.go:89] found id: ""
	I1213 10:12:05.311831  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.311859  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:05.311904  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:05.312016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:05.337137  285837 cri.go:89] found id: ""
	I1213 10:12:05.337161  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.337170  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:05.337176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:05.337232  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:05.361938  285837 cri.go:89] found id: ""
	I1213 10:12:05.361967  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.361976  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:05.361982  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:05.362063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:05.387423  285837 cri.go:89] found id: ""
	I1213 10:12:05.387460  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.387469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
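
Each sweep above (cri.go:54) queries the CRI once per expected control-plane component, and every query comes back empty. The same check can be reproduced as a short loop; this is a sketch using only the flags from the log itself and assumes crictl is installed in the node:

    # list containers in any state for each expected component, as cri.go does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done
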
	I1213 10:12:05.387478  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:05.387489  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:05.446385  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:05.446423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:05.460052  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:05.460075  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:05.534925  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:05.534954  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:05.534969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:05.561237  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:05.561278  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
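
The "container status" step above is written defensively: the inner `which crictl || echo crictl` keeps the command line well-formed even when crictl is not on root's PATH, and the trailing `|| sudo docker ps -a` falls back to docker on runtimes where crictl fails. Run standalone (same form as the log, with $() in place of backticks):

    # prefer crictl; fall back to docker if crictl is absent or errors out
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
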
	I1213 10:12:08.090523  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:08.103723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:08.103793  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:08.130437  285837 cri.go:89] found id: ""
	I1213 10:12:08.130464  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.130473  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:08.130479  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:08.130536  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:08.158259  285837 cri.go:89] found id: ""
	I1213 10:12:08.158286  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.158295  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:08.158301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:08.158359  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:08.183457  285837 cri.go:89] found id: ""
	I1213 10:12:08.183484  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.183493  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:08.183499  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:08.183589  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:08.207480  285837 cri.go:89] found id: ""
	I1213 10:12:08.207507  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.207613  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:08.207620  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:08.207681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:08.231959  285837 cri.go:89] found id: ""
	I1213 10:12:08.232037  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.232053  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:08.232061  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:08.232131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:08.255921  285837 cri.go:89] found id: ""
	I1213 10:12:08.255986  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.256003  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:08.256010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:08.256074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:08.280187  285837 cri.go:89] found id: ""
	I1213 10:12:08.280254  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.280269  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:08.280276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:08.280332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:08.308900  285837 cri.go:89] found id: ""
	I1213 10:12:08.308974  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.308997  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:08.309014  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:08.309029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:08.322959  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:08.322986  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:08.387674  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:08.387701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:08.387715  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:08.413378  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:08.413414  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:08.444856  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:08.444888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
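
The journal and dmesg gathering steps in each cycle map to plain journalctl/dmesg invocations, bounded to the last 400 lines. Reproducing them directly (commands taken verbatim from the log):

    sudo journalctl -u kubelet -n 400       # kubelet unit log, last 400 lines
    sudo journalctl -u containerd -n 400    # containerd unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
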
	I1213 10:12:11.000292  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:11.012216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:11.012287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:11.063803  285837 cri.go:89] found id: ""
	I1213 10:12:11.063829  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.063838  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:11.063845  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:11.063910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:11.103072  285837 cri.go:89] found id: ""
	I1213 10:12:11.103099  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.103109  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:11.103115  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:11.103171  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:11.138581  285837 cri.go:89] found id: ""
	I1213 10:12:11.138606  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.138614  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:11.138631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:11.138686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:11.163663  285837 cri.go:89] found id: ""
	I1213 10:12:11.163735  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.163760  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:11.163779  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:11.163862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:11.188635  285837 cri.go:89] found id: ""
	I1213 10:12:11.188701  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.188716  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:11.188722  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:11.188779  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:11.217597  285837 cri.go:89] found id: ""
	I1213 10:12:11.217620  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.217628  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:11.217634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:11.217690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:11.241986  285837 cri.go:89] found id: ""
	I1213 10:12:11.242009  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.242017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:11.242023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:11.242078  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:11.266556  285837 cri.go:89] found id: ""
	I1213 10:12:11.266578  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.266586  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:11.266596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:11.266607  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:11.298567  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:11.298592  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.354117  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:11.354151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:11.367112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:11.367187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:11.430754  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:11.430832  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:11.430859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:13.957251  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:13.968979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:13.969058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:13.994304  285837 cri.go:89] found id: ""
	I1213 10:12:13.994326  285837 logs.go:282] 0 containers: []
	W1213 10:12:13.994334  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:13.994341  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:13.994396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:14.032552  285837 cri.go:89] found id: ""
	I1213 10:12:14.032584  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.032593  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:14.032600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:14.032663  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:14.104797  285837 cri.go:89] found id: ""
	I1213 10:12:14.104823  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.104833  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:14.104839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:14.104901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:14.130796  285837 cri.go:89] found id: ""
	I1213 10:12:14.130821  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.130831  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:14.130837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:14.130892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:14.157587  285837 cri.go:89] found id: ""
	I1213 10:12:14.157616  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.157625  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:14.157631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:14.157689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:14.183166  285837 cri.go:89] found id: ""
	I1213 10:12:14.183191  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.183199  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:14.183205  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:14.183271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:14.207844  285837 cri.go:89] found id: ""
	I1213 10:12:14.207871  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.207880  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:14.207886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:14.207943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:14.232398  285837 cri.go:89] found id: ""
	I1213 10:12:14.232420  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.232429  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:14.232438  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:14.232450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:14.263838  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:14.263869  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:14.322835  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:14.322870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:14.336577  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:14.336609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:14.404961  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
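
Each cycle opens with a liveness probe for the apiserver process, and the roughly 3-second cadence of the timestamps shows it polling until the overall start timeout expires. A sketch of an equivalent wait loop; the pgrep invocation is taken from the log, while the iteration count and sleep are illustrative, not minikube's actual values:

    # poll for a kube-apiserver process belonging to the minikube profile
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # exit 0 once found
      sleep 3
    done
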
	I1213 10:12:14.405007  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:14.405047  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:16.930423  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:16.941126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:16.941197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:16.968990  285837 cri.go:89] found id: ""
	I1213 10:12:16.969013  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.969023  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:16.969029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:16.969093  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:16.994277  285837 cri.go:89] found id: ""
	I1213 10:12:16.994298  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.994307  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:16.994319  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:16.994374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:17.052160  285837 cri.go:89] found id: ""
	I1213 10:12:17.052187  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.052196  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:17.052202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:17.052260  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:17.112056  285837 cri.go:89] found id: ""
	I1213 10:12:17.112122  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.112136  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:17.112142  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:17.112201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:17.137264  285837 cri.go:89] found id: ""
	I1213 10:12:17.137287  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.137295  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:17.137301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:17.137356  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:17.161759  285837 cri.go:89] found id: ""
	I1213 10:12:17.161780  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.161802  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:17.161808  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:17.161864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:17.187256  285837 cri.go:89] found id: ""
	I1213 10:12:17.187288  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.187296  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:17.187302  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:17.187372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:17.213316  285837 cri.go:89] found id: ""
	I1213 10:12:17.213380  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.213400  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:17.213413  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:17.213424  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:17.241644  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:17.241674  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:17.298584  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:17.298617  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:17.313303  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:17.313331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:17.387719  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:17.387742  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:17.387755  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:19.919282  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:19.929646  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:19.929711  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:19.961717  285837 cri.go:89] found id: ""
	I1213 10:12:19.961739  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.961748  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:19.961754  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:19.961811  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:19.986281  285837 cri.go:89] found id: ""
	I1213 10:12:19.986306  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.986315  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:19.986321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:19.986375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:20.035442  285837 cri.go:89] found id: ""
	I1213 10:12:20.035468  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.035478  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:20.035484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:20.035574  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:20.086605  285837 cri.go:89] found id: ""
	I1213 10:12:20.086627  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.086635  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:20.086642  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:20.086698  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:20.121043  285837 cri.go:89] found id: ""
	I1213 10:12:20.121065  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.121073  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:20.121079  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:20.121136  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:20.148016  285837 cri.go:89] found id: ""
	I1213 10:12:20.148083  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.148105  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:20.148124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:20.148209  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:20.175168  285837 cri.go:89] found id: ""
	I1213 10:12:20.175234  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.175257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:20.175276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:20.175363  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:20.206568  285837 cri.go:89] found id: ""
	I1213 10:12:20.206590  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.206599  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:20.206608  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:20.206619  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:20.234244  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:20.234308  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:20.290937  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:20.290972  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:20.304498  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:20.304527  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:20.367763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:20.367830  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:20.367849  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:22.894711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:22.905901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:22.905969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:22.936437  285837 cri.go:89] found id: ""
	I1213 10:12:22.936460  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.936468  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:22.936474  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:22.936533  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:22.961367  285837 cri.go:89] found id: ""
	I1213 10:12:22.961390  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.961416  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:22.961425  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:22.961484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:22.984924  285837 cri.go:89] found id: ""
	I1213 10:12:22.984949  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.984958  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:22.984964  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:22.985046  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:23.012110  285837 cri.go:89] found id: ""
	I1213 10:12:23.012175  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.012191  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:23.012198  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:23.012258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:23.053789  285837 cri.go:89] found id: ""
	I1213 10:12:23.053816  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.053825  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:23.053831  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:23.053888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:23.102082  285837 cri.go:89] found id: ""
	I1213 10:12:23.102104  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.102112  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:23.102118  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:23.102173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:23.139793  285837 cri.go:89] found id: ""
	I1213 10:12:23.139820  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.139830  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:23.139836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:23.139892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:23.163400  285837 cri.go:89] found id: ""
	I1213 10:12:23.163426  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.163436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:23.163451  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:23.163464  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:23.227709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:23.227744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:23.241604  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:23.241631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:23.305636  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:23.305670  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:23.305683  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:23.331847  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:23.331879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:25.858551  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:25.871752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:25.871822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:25.897476  285837 cri.go:89] found id: ""
	I1213 10:12:25.897527  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.897536  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:25.897543  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:25.897600  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:25.925782  285837 cri.go:89] found id: ""
	I1213 10:12:25.925807  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.925817  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:25.925823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:25.925906  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:25.949723  285837 cri.go:89] found id: ""
	I1213 10:12:25.949750  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.949760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:25.949766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:25.949842  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:25.973991  285837 cri.go:89] found id: ""
	I1213 10:12:25.974016  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.974025  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:25.974032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:25.974107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:26.001033  285837 cri.go:89] found id: ""
	I1213 10:12:26.001056  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.001064  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:26.001070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:26.001144  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:26.077273  285837 cri.go:89] found id: ""
	I1213 10:12:26.077300  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.077309  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:26.077316  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:26.077397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:26.122203  285837 cri.go:89] found id: ""
	I1213 10:12:26.122230  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.122240  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:26.122246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:26.122346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:26.147712  285837 cri.go:89] found id: ""
	I1213 10:12:26.147736  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.147745  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:26.147781  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:26.147799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:26.203487  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:26.203528  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:26.217213  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:26.217246  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:26.284727  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:26.276312   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.277260   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.278746   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.279123   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.280769   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:26.284751  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:26.284763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:26.312716  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:26.312773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:28.841875  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:28.852491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:28.852562  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:28.881629  285837 cri.go:89] found id: ""
	I1213 10:12:28.881653  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.881662  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:28.881669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:28.881728  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:28.906270  285837 cri.go:89] found id: ""
	I1213 10:12:28.906296  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.906306  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:28.906312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:28.906370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:28.931578  285837 cri.go:89] found id: ""
	I1213 10:12:28.931599  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.931607  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:28.931612  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:28.931666  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:28.957311  285837 cri.go:89] found id: ""
	I1213 10:12:28.957334  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.957343  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:28.957349  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:28.957406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:28.981753  285837 cri.go:89] found id: ""
	I1213 10:12:28.981778  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.981787  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:28.981794  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:28.981849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:29.006917  285837 cri.go:89] found id: ""
	I1213 10:12:29.006945  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.006955  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:29.006962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:29.007029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:29.066909  285837 cri.go:89] found id: ""
	I1213 10:12:29.066935  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.066944  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:29.066950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:29.067008  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:29.105599  285837 cri.go:89] found id: ""
	I1213 10:12:29.105625  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.105633  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:29.105642  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:29.105652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:29.130961  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:29.131003  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:29.157785  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:29.157819  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:29.213436  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:29.213472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:29.227454  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:29.227485  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:29.298087  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:29.289850   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.290315   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.291761   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.292182   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.293636   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:31.798509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:31.809145  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:31.809221  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:31.833247  285837 cri.go:89] found id: ""
	I1213 10:12:31.833272  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.833281  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:31.833290  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:31.833348  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:31.861756  285837 cri.go:89] found id: ""
	I1213 10:12:31.861779  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.861789  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:31.861795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:31.861851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:31.885473  285837 cri.go:89] found id: ""
	I1213 10:12:31.885496  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.885506  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:31.885512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:31.885566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:31.908602  285837 cri.go:89] found id: ""
	I1213 10:12:31.908626  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.908634  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:31.908640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:31.908695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:31.933964  285837 cri.go:89] found id: ""
	I1213 10:12:31.933990  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.933999  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:31.934005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:31.934063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:31.962393  285837 cri.go:89] found id: ""
	I1213 10:12:31.962416  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.962424  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:31.962431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:31.962490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:31.986650  285837 cri.go:89] found id: ""
	I1213 10:12:31.986676  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.986685  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:31.986692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:31.986749  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:32.017192  285837 cri.go:89] found id: ""
	I1213 10:12:32.017220  285837 logs.go:282] 0 containers: []
	W1213 10:12:32.017229  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:32.017239  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:32.017252  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:32.035285  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:32.035316  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:32.145875  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:32.134854   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.135586   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.137228   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.140039   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.141704   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:32.145896  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:32.145909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:32.172371  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:32.172409  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:32.202803  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:32.202833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
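
Each cycle above is the same apiserver wait loop: a pgrep probe for the kube-apiserver process, a crictl lookup for each control-plane container, then a round of log gathering when nothing is found. Below is a minimal Go sketch of that probe, modeled directly on the commands in the log; the retry count and the roughly 3-second cadence are assumptions read off the timestamps above, not minikube's actual source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp mirrors the two probes in the log: pgrep for the process,
// then crictl for a container named kube-apiserver. The command strings
// are copied from the log lines; error handling is simplified.
func apiserverUp() bool {
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
		return true
	}
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	return err == nil && len(out) > 0
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		if apiserverUp() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // roughly the retry cadence visible in the timestamps
	}
	fmt.Println("gave up: kube-apiserver never appeared")
}

In this run the loop never succeeds, which is why the same gather-and-retry block repeats until the test's timeout.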
	I1213 10:12:34.759246  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:34.770746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:34.770823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:34.798561  285837 cri.go:89] found id: ""
	I1213 10:12:34.798585  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.798594  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:34.798601  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:34.798664  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:34.824521  285837 cri.go:89] found id: ""
	I1213 10:12:34.824544  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.824553  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:34.824559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:34.824616  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:34.848643  285837 cri.go:89] found id: ""
	I1213 10:12:34.848670  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.848680  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:34.848687  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:34.848746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:34.874242  285837 cri.go:89] found id: ""
	I1213 10:12:34.874263  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.874271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:34.874277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:34.874331  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:34.898270  285837 cri.go:89] found id: ""
	I1213 10:12:34.898298  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.898308  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:34.898314  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:34.898374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:34.922469  285837 cri.go:89] found id: ""
	I1213 10:12:34.922492  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.922502  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:34.922508  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:34.922565  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:34.949223  285837 cri.go:89] found id: ""
	I1213 10:12:34.949250  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.949259  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:34.949266  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:34.949320  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:34.977644  285837 cri.go:89] found id: ""
	I1213 10:12:34.977675  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.977685  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:34.977696  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:34.977707  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:35.038624  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:35.038662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:35.079394  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:35.079475  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:35.160019  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:35.151819   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.152538   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154163   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154458   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.155960   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:35.160066  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:35.160078  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:35.186026  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:35.186058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:37.713450  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:37.724509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:37.724585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:37.748173  285837 cri.go:89] found id: ""
	I1213 10:12:37.748197  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.748206  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:37.748213  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:37.748274  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:37.772262  285837 cri.go:89] found id: ""
	I1213 10:12:37.772285  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.772294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:37.772312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:37.772371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:37.797053  285837 cri.go:89] found id: ""
	I1213 10:12:37.797077  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.797086  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:37.797093  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:37.797151  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:37.821445  285837 cri.go:89] found id: ""
	I1213 10:12:37.821468  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.821477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:37.821484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:37.821538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:37.848175  285837 cri.go:89] found id: ""
	I1213 10:12:37.848199  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.848208  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:37.848214  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:37.848272  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:37.882751  285837 cri.go:89] found id: ""
	I1213 10:12:37.882774  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.882784  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:37.882789  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:37.882847  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:37.907236  285837 cri.go:89] found id: ""
	I1213 10:12:37.907262  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.907271  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:37.907277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:37.907334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:37.931030  285837 cri.go:89] found id: ""
	I1213 10:12:37.931053  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.931061  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:37.931070  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:37.931082  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:37.944201  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:37.944228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:38.014013  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:38.001007   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.001921   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.006866   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.007610   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.009335   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:38.014037  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:38.014051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:38.050241  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:38.050336  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:38.123205  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:38.123240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
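
Every "describe nodes" attempt fails identically: kubectl inside the node dials localhost:8443 and nothing is listening, hence the repeated "connection refused". A self-contained Go probe that reproduces just that check (host and port copied from the log; this is an illustration, not part of the test harness):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is failing against in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver listening, this prints a "connection refused"
		// error matching the memcache.go lines in the log.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}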
	I1213 10:12:40.686197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:40.696710  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:40.696797  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:40.726004  285837 cri.go:89] found id: ""
	I1213 10:12:40.726031  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.726040  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:40.726046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:40.726104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:40.751506  285837 cri.go:89] found id: ""
	I1213 10:12:40.751558  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.751567  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:40.751573  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:40.751637  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:40.777206  285837 cri.go:89] found id: ""
	I1213 10:12:40.777232  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.777241  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:40.777247  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:40.777307  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:40.806234  285837 cri.go:89] found id: ""
	I1213 10:12:40.806256  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.806264  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:40.806270  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:40.806326  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:40.835873  285837 cri.go:89] found id: ""
	I1213 10:12:40.835898  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.835907  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:40.835913  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:40.835969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:40.861792  285837 cri.go:89] found id: ""
	I1213 10:12:40.861821  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.861830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:40.861836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:40.861897  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:40.887384  285837 cri.go:89] found id: ""
	I1213 10:12:40.887409  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.887418  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:40.887424  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:40.887482  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:40.918474  285837 cri.go:89] found id: ""
	I1213 10:12:40.918499  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.918508  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:40.918518  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:40.918529  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:40.974634  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:40.974669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:40.988450  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:40.988481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:41.102570  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:41.091923   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.092616   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094183   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094723   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.096608   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:41.102639  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:41.102664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:41.132124  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:41.132159  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.660524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:43.671119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:43.671190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:43.696313  285837 cri.go:89] found id: ""
	I1213 10:12:43.696343  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.696356  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:43.696364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:43.696422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:43.720831  285837 cri.go:89] found id: ""
	I1213 10:12:43.720856  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.720865  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:43.720871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:43.720930  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:43.745280  285837 cri.go:89] found id: ""
	I1213 10:12:43.745305  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.745314  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:43.745321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:43.745382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:43.771809  285837 cri.go:89] found id: ""
	I1213 10:12:43.771832  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.771842  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:43.771848  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:43.771919  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:43.795691  285837 cri.go:89] found id: ""
	I1213 10:12:43.795715  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.795725  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:43.795731  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:43.795789  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:43.821222  285837 cri.go:89] found id: ""
	I1213 10:12:43.821246  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.821254  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:43.821261  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:43.821316  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:43.849405  285837 cri.go:89] found id: ""
	I1213 10:12:43.849428  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.849437  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:43.849450  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:43.849515  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:43.874124  285837 cri.go:89] found id: ""
	I1213 10:12:43.874150  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.874159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:43.874167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:43.874178  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:43.938106  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:43.929845   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.930320   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932022   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932327   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.933807   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:43.938129  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:43.938141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:43.963803  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:43.963838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.994003  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:43.994030  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:44.069701  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:44.069786  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
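
The paired 'found id: ""' and '0 containers: []' lines reflect how the "crictl ps --quiet" output is consumed: the command prints one container ID per line, so empty output parses to zero containers. A small Go sketch of that parsing step (the helper name parseIDs is hypothetical, chosen here for illustration):

package main

import (
	"fmt"
	"strings"
)

// parseIDs splits crictl's --quiet output, which emits one container ID
// per line; blank output therefore yields an empty slice.
func parseIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if trimmed := strings.TrimSpace(line); trimmed != "" {
			ids = append(ids, trimmed)
		}
	}
	return ids
}

func main() {
	ids := parseIDs("") // the empty result every probe above gets
	fmt.Printf("%d containers: %v\n", len(ids), ids) // prints: 0 containers: []
}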
	I1213 10:12:46.587357  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:46.597851  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:46.597931  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:46.622018  285837 cri.go:89] found id: ""
	I1213 10:12:46.622044  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.622054  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:46.622060  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:46.622119  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:46.647494  285837 cri.go:89] found id: ""
	I1213 10:12:46.647537  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.647547  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:46.647553  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:46.647612  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:46.673199  285837 cri.go:89] found id: ""
	I1213 10:12:46.673223  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.673237  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:46.673243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:46.673302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:46.702715  285837 cri.go:89] found id: ""
	I1213 10:12:46.702777  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.702799  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:46.702818  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:46.702888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:46.732013  285837 cri.go:89] found id: ""
	I1213 10:12:46.732036  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.732044  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:46.732049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:46.732111  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:46.755882  285837 cri.go:89] found id: ""
	I1213 10:12:46.755907  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.755925  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:46.755933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:46.755993  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:46.780993  285837 cri.go:89] found id: ""
	I1213 10:12:46.781016  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.781025  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:46.781031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:46.781094  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:46.806179  285837 cri.go:89] found id: ""
	I1213 10:12:46.806255  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.806280  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:46.806305  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:46.806342  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:46.863518  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:46.863553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:46.877399  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:46.877428  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:46.946626  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:46.939032   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.939507   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941182   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941602   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.942800   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:46.946696  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:46.946739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:46.972274  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:46.972306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:49.510021  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:49.520415  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:49.520489  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:49.544492  285837 cri.go:89] found id: ""
	I1213 10:12:49.544515  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.544524  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:49.544531  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:49.544595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:49.574538  285837 cri.go:89] found id: ""
	I1213 10:12:49.574564  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.574573  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:49.574593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:49.574659  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:49.603237  285837 cri.go:89] found id: ""
	I1213 10:12:49.603267  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.603277  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:49.603283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:49.603339  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:49.627482  285837 cri.go:89] found id: ""
	I1213 10:12:49.627508  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.627547  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:49.627555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:49.627635  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:49.652503  285837 cri.go:89] found id: ""
	I1213 10:12:49.652532  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.652541  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:49.652547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:49.652620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:49.677443  285837 cri.go:89] found id: ""
	I1213 10:12:49.677474  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.677483  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:49.677490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:49.677551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:49.702698  285837 cri.go:89] found id: ""
	I1213 10:12:49.702723  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.702733  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:49.702750  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:49.702813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:49.731706  285837 cri.go:89] found id: ""
	I1213 10:12:49.731727  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.731735  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
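The probe sequence above is how minikube concludes that no control-plane container ever came up: it asks the CRI runtime for each expected container by name and gets an empty ID list every time. A minimal sketch of running the same check by hand, assuming the profile name from this run (newest-cni-987495) and that minikube ssh forwards the quoted script to the node:

    # Ask containerd (via crictl) for each control-plane container, as the log
    # does; a count of 0 means that container was never created.
    minikube ssh -p newest-cni-987495 -- '
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        printf "%-24s %s\n" "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
      done'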
	I1213 10:12:49.731750  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:49.731762  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:49.787702  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:49.787741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:49.801570  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:49.801602  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:49.870136  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:49.861042   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.862455   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.863332   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.864338   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.865026   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:49.861042   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.862455   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.863332   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.864338   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.865026   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:49.870158  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:49.870171  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:49.896174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:49.896211  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:52.425030  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:52.438702  285837 out.go:203] 
	W1213 10:12:52.441528  285837 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:12:52.441562  285837 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:12:52.441572  285837 out.go:285] * Related issues:
	W1213 10:12:52.441583  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:12:52.441596  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:12:52.444462  285837 out.go:203] 
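K8S_APISERVER_MISSING is raised once the six-minute wait for an apiserver process expires; the wait itself is the pgrep call visible at the start and end of this log. A sketch of reproducing that check manually, under the same profile-name assumption as above:

    # minikube's apiserver-process wait, run by hand: -f matches the full command
    # line, -x requires an exact match, -n returns only the newest matching PID.
    minikube ssh -p newest-cni-987495 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # A non-zero exit (no matching process) is what the 6m0s wait kept seeing.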
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139688152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139757700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139854054Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139930494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139999639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140070369Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140128880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140186801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140255347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140343226Z" level=info msg="Connect containerd service"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140691374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.141400233Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153338815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153402373Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153439969Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153482546Z" level=info msg="Start recovering state"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194202999Z" level=info msg="Start event monitor"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194399260Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194485793Z" level=info msg="Start streaming server"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194562487Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194779253Z" level=info msg="runtime interface starting up..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194850729Z" level=info msg="starting plugins..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194929983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:06:50 newest-cni-987495 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.196601776Z" level=info msg="containerd successfully booted in 0.081602s"
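The only error in the containerd startup log is the CNI loader finding /etc/cni/net.d empty. That is normal this early (the CNI addon installs its config after the control plane is up), but it is also where later "cni plugin not initialized" symptoms trace back to. Purely to illustrate the kind of file containerd is looking for, here is a hypothetical minimal bridge conflist; this is not what minikube's kindnet addon actually writes:

    # Hypothetical example only: a minimal CNI config that would satisfy the loader.
    sudo tee /etc/cni/net.d/10-example.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "example",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF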
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:55.619119   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:55.619551   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:55.621465   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:55.622138   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:55.623855   13473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
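The repeated "connection refused" here is kubectl on the node dialing localhost:8443 and finding no listener, consistent with the apiserver never starting. A quick way to confirm that from the host, assuming ss is available in the node image:

    # Check whether anything is bound to the apiserver port inside the node.
    minikube ssh -p newest-cni-987495 -- \
      'sudo ss -ltn | grep :8443 || echo "nothing listening on 8443"'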
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:12:55 up  1:55,  0 user,  load average: 0.76, 0.65, 1.07
	Linux newest-cni-987495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:12:52 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:12:53 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 13 10:12:53 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:53 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:53 newest-cni-987495 kubelet[13350]: E1213 10:12:53.171233   13350 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:12:53 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:53 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:54 newest-cni-987495 kubelet[13367]: E1213 10:12:54.089633   13367 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:54 newest-cni-987495 kubelet[13377]: E1213 10:12:54.830526   13377 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:54 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:12:55 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 10:12:55 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:55 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:55 newest-cni-987495 kubelet[13464]: E1213 10:12:55.584574   13464 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:12:55 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:55 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
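This kubelet loop is the root cause of the failure: kubelet v1.35.0-beta.0 refuses to start because the host is on cgroup v1, so the apiserver static pod can never be created, and every systemd restart (counter at 485 here) dies on the same validation error. A sketch of the two checks involved; the config field name is assumed from the cgroup v1 deprecation work (KEP-4569) and should be verified against the v1.35 KubeletConfiguration reference:

    # "cgroup2fs" means the host is on cgroup v2; "tmpfs" means legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup

    # The failing validation corresponds to failCgroupV1: true in the kubelet
    # config (field name assumed); kubeadm-provisioned nodes keep it here.
    sudo grep -i failcgroupv1 /var/lib/kubelet/config.yaml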
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (412.970779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-987495" apiserver is not running, skipping kubectl commands (state="Stopped")
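The follow-up status probe uses a Go template to print just the apiserver field, which is why the stdout above is the single word "Stopped". The same command is runnable as-is from the test workspace:

    # Print only the apiserver state for the profile; the non-zero exit status
    # signals a stopped component, matching the "(may be ok)" note above.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p newest-cni-987495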
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (373.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [warning above repeated 29 more times while the poll retried]
E1213 10:08:51.887706    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [warning above repeated 43 more times while the poll retried]
E1213 10:09:35.553416    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
    [warning above repeated 35 more times while the poll retried]
E1213 10:10:11.200154    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:10:58.618980    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:11:40.013142    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:12:14.443601    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
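The interleaved E1213 cert_rotation errors record client-go's transport cache trying to reload client certificates for profiles (default-k8s-diff-port-544967, old-k8s-version-640993, functional-074420, addons-289425) whose .minikube directories earlier tests have already deleted. A minimal sketch, assuming client-go's clientcmd and rest.Config (the kubeconfig path is hypothetical), of why the failure surfaces at transport time rather than at kubeconfig-load time:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig referencing a minikube profile's client.crt.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		fmt.Println("kubeconfig parse failed:", err)
		return
	}
	// clientcmd records only the certificate *path* in the rest.Config; the
	// file itself is opened later, when a transport is built or the cert is
	// rotated. Deleting the profile after the config is cached is what yields
	// the "Loading client cert failed ... no such file or directory" errors
	// interleaved through this log.
	fmt.Println("client cert path:", cfg.TLSClientConfig.CertFile)
}

Because the stale path is only dereferenced lazily, these errors are noise from previously torn-down profiles rather than failures of the test currently running.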
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:13:51.888384    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 38 more times]
I1213 10:14:30.103693    4120 config.go:182] Loaded profile config "auto-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 4 more times]
E1213 10:14:35.553121    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 35 more times]
E1213 10:15:11.200166    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[identical WARNING repeated 37 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:16:34.267369    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous warning repeated 5 more times)
E1213 10:16:40.009920    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous warning repeated 16 more times)
E1213 10:16:57.524980    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous warning repeated 16 more times)
E1213 10:17:14.443177    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous warning repeated 6 more times)
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 2 (401.99675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
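The nine minutes of "connection refused" warnings all trace back to the apiserver at 192.168.76.2:8443 never coming back after the stop/start cycle. A minimal sketch of re-checking the same condition by hand, assuming the profile's kubeconfig context is also named "no-preload-328069" (an assumption; the harness polls through helpers_test.go rather than kubectl):

	# Assumed context name; the label selector matches the one polled above.
	kubectl --context no-preload-328069 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard -o wide
	# Check the control plane first; status reports the apiserver separately.
	out/minikube-linux-arm64 status -p no-preload-328069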
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
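The proxy snapshot above is all-empty, so a proxy is unlikely to explain the refused connections. A sketch of taking the same snapshot in plain POSIX shell (variable names as printed in the log):

	# Prints any proxy variables that are set; "<empty>" otherwise.
	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo '<empty>'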
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:

-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:02:12.212548985Z",
	            "FinishedAt": "2025-12-13T10:02:10.889738311Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e549dceaa2628f46a792f0513237bae1c9187e2280b148782465d5223dc837ce",
	            "SandboxKey": "/var/run/docker/netns/e549dceaa262",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:94:67:0e:78:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "1f33b140f1554f462bc470ee8cae381e2b3ff6375e4e1f2dfdc3776ccc0d5791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
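The inspect output shows the node container still Running, with 8443/tcp published on 127.0.0.1:33101, so the refusal happens inside the node rather than at the port mapping. A sketch for probing that mapping by hand (the Go template is standard "docker inspect --format" syntax; hitting /version on the apiserver is an assumption about what it exposes unauthenticated):

	# Extract the host port mapped to the apiserver's 8443/tcp.
	PORT=$(docker inspect -f \
	  '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  no-preload-328069)
	# -k skips TLS verification against minikube's self-signed cert.
	curl -k "https://127.0.0.1:${PORT}/version"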
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 2 (370.585382ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
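Host reports Running while the apiserver check above returned Stopped, which matches a healthy container whose kube-apiserver never came up. One way to look inside the node for it (a sketch; assumes crictl is available in the kicbase image, as it normally is for minikube's containerd runtime):

	# List apiserver containers, including exited ones, inside the node.
	out/minikube-linux-arm64 ssh -p no-preload-328069 -- \
	  sudo crictl ps -a --name kube-apiserver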
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-324081 sudo systemctl status kubelet --all --full --no-pager                                                                      │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo systemctl cat kubelet --no-pager                                                                                      │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo journalctl -xeu kubelet --all --full --no-pager                                                                       │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /etc/kubernetes/kubelet.conf                                                                                      │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /var/lib/kubelet/config.yaml                                                                                      │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo systemctl status docker --all --full --no-pager                                                                       │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo systemctl cat docker --no-pager                                                                                       │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /etc/docker/daemon.json                                                                                           │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo docker system info                                                                                                    │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo systemctl status cri-docker --all --full --no-pager                                                                   │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo systemctl cat cri-docker --no-pager                                                                                   │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                              │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                        │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cri-dockerd --version                                                                                                 │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo systemctl status containerd --all --full --no-pager                                                                   │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo systemctl cat containerd --no-pager                                                                                   │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /lib/systemd/system/containerd.service                                                                            │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo cat /etc/containerd/config.toml                                                                                       │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo containerd config dump                                                                                                │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo systemctl status crio --all --full --no-pager                                                                         │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	│ ssh     │ -p kindnet-324081 sudo systemctl cat crio --no-pager                                                                                         │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                               │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ ssh     │ -p kindnet-324081 sudo crio config                                                                                                           │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ delete  │ -p kindnet-324081                                                                                                                            │ kindnet-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │ 13 Dec 25 10:16 UTC │
	│ start   │ -p calico-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd │ calico-324081  │ jenkins │ v1.37.0 │ 13 Dec 25 10:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:16:58
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:16:58.208770  318132 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:16:58.208887  318132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:58.208939  318132 out.go:374] Setting ErrFile to fd 2...
	I1213 10:16:58.208945  318132 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:16:58.209210  318132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:16:58.209641  318132 out.go:368] Setting JSON to false
	I1213 10:16:58.210646  318132 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7171,"bootTime":1765613848,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:16:58.210711  318132 start.go:143] virtualization:  
	I1213 10:16:58.214204  318132 out.go:179] * [calico-324081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:16:58.218767  318132 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:16:58.218892  318132 notify.go:221] Checking for updates...
	I1213 10:16:58.225497  318132 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:16:58.228721  318132 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:16:58.231794  318132 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:16:58.234831  318132 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:16:58.237962  318132 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:16:58.241532  318132 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:16:58.241635  318132 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:16:58.279227  318132 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:16:58.279351  318132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:16:58.335350  318132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:16:58.325849826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:16:58.335455  318132 docker.go:319] overlay module found
	I1213 10:16:58.338772  318132 out.go:179] * Using the docker driver based on user configuration
	I1213 10:16:58.341658  318132 start.go:309] selected driver: docker
	I1213 10:16:58.341674  318132 start.go:927] validating driver "docker" against <nil>
	I1213 10:16:58.341687  318132 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:16:58.342406  318132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:16:58.395304  318132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:16:58.386255821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:16:58.395452  318132 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:16:58.395741  318132 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:16:58.398743  318132 out.go:179] * Using Docker driver with root privileges
	I1213 10:16:58.401616  318132 cni.go:84] Creating CNI manager for "calico"
	I1213 10:16:58.401644  318132 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1213 10:16:58.401722  318132 start.go:353] cluster config:
	{Name:calico-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:16:58.404913  318132 out.go:179] * Starting "calico-324081" primary control-plane node in "calico-324081" cluster
	I1213 10:16:58.407748  318132 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:16:58.410742  318132 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:16:58.413563  318132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 10:16:58.413610  318132 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 10:16:58.413620  318132 cache.go:65] Caching tarball of preloaded images
	I1213 10:16:58.413648  318132 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:16:58.413714  318132 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:16:58.413724  318132 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 10:16:58.413820  318132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/config.json ...
	I1213 10:16:58.413836  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/config.json: {Name:mk56776d13ebac428405d2bcf3020ba0bb589a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:16:58.432599  318132 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:16:58.432628  318132 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:16:58.432650  318132 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:16:58.432683  318132 start.go:360] acquireMachinesLock for calico-324081: {Name:mkb0c53ed9446fdf3cf21f276b04abcfa0d68529 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:16:58.432792  318132 start.go:364] duration metric: took 88.608µs to acquireMachinesLock for "calico-324081"
	I1213 10:16:58.432821  318132 start.go:93] Provisioning new machine with config: &{Name:calico-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:16:58.432899  318132 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:16:58.437089  318132 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:16:58.437326  318132 start.go:159] libmachine.API.Create for "calico-324081" (driver="docker")
	I1213 10:16:58.437365  318132 client.go:173] LocalClient.Create starting
	I1213 10:16:58.437432  318132 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 10:16:58.437473  318132 main.go:143] libmachine: Decoding PEM data...
	I1213 10:16:58.437493  318132 main.go:143] libmachine: Parsing certificate...
	I1213 10:16:58.437549  318132 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 10:16:58.437570  318132 main.go:143] libmachine: Decoding PEM data...
	I1213 10:16:58.437587  318132 main.go:143] libmachine: Parsing certificate...
	I1213 10:16:58.437941  318132 cli_runner.go:164] Run: docker network inspect calico-324081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:16:58.454600  318132 cli_runner.go:211] docker network inspect calico-324081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:16:58.454676  318132 network_create.go:284] running [docker network inspect calico-324081] to gather additional debugging logs...
	I1213 10:16:58.454694  318132 cli_runner.go:164] Run: docker network inspect calico-324081
	W1213 10:16:58.471155  318132 cli_runner.go:211] docker network inspect calico-324081 returned with exit code 1
	I1213 10:16:58.471184  318132 network_create.go:287] error running [docker network inspect calico-324081]: docker network inspect calico-324081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-324081 not found
	I1213 10:16:58.471200  318132 network_create.go:289] output of [docker network inspect calico-324081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-324081 not found
	
	** /stderr **
	I1213 10:16:58.471294  318132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:16:58.489126  318132 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 10:16:58.489505  318132 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 10:16:58.489831  318132 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 10:16:58.490123  318132 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 10:16:58.490538  318132 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1b990}
	I1213 10:16:58.490567  318132 network_create.go:124] attempt to create docker network calico-324081 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:16:58.490621  318132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-324081 calico-324081
	I1213 10:16:58.546193  318132 network_create.go:108] docker network calico-324081 192.168.85.0/24 created
	I1213 10:16:58.546229  318132 kic.go:121] calculated static IP "192.168.85.2" for the "calico-324081" container
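For reference, the subnet probing above can be checked by hand once the network exists; a minimal sketch using only the docker CLI (names and values are the ones from this run):

	# Print the subnet and gateway minikube settled on for calico-324081.
	docker network inspect calico-324081 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# Expected here: subnet=192.168.85.0/24 gateway=192.168.85.1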
	I1213 10:16:58.546322  318132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:16:58.563007  318132 cli_runner.go:164] Run: docker volume create calico-324081 --label name.minikube.sigs.k8s.io=calico-324081 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:16:58.580832  318132 oci.go:103] Successfully created a docker volume calico-324081
	I1213 10:16:58.580928  318132 cli_runner.go:164] Run: docker run --rm --name calico-324081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-324081 --entrypoint /usr/bin/test -v calico-324081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:16:59.090815  318132 oci.go:107] Successfully prepared a docker volume calico-324081
	I1213 10:16:59.090884  318132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 10:16:59.090905  318132 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:16:59.090966  318132 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-324081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:17:03.575962  318132 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-324081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.48495097s)
	I1213 10:17:03.576002  318132 kic.go:203] duration metric: took 4.485092534s to extract preloaded images to volume ...
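The preload step above is simply an lz4-compressed tarball unpacked into a named volume through a throwaway container. The same pattern in isolation (a sketch; the tarball path is this run's cache location, and any image carrying GNU tar and lz4 would work in place of the kicbase image):

	# Unpack an lz4-compressed image tarball into the docker volume calico-324081.
	docker run --rm \
	  -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro \
	  -v calico-324081:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083 \
	  -I lz4 -xf /preloaded.tar -C /extractDir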
	W1213 10:17:03.576187  318132 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:17:03.576330  318132 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:17:03.643474  318132 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-324081 --name calico-324081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-324081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-324081 --network calico-324081 --ip 192.168.85.2 --volume calico-324081:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:17:03.975355  318132 cli_runner.go:164] Run: docker container inspect calico-324081 --format={{.State.Running}}
	I1213 10:17:03.999096  318132 cli_runner.go:164] Run: docker container inspect calico-324081 --format={{.State.Status}}
	I1213 10:17:04.028692  318132 cli_runner.go:164] Run: docker exec calico-324081 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:17:04.107046  318132 oci.go:144] the created container "calico-324081" has a running status.
	I1213 10:17:04.107089  318132 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa...
	I1213 10:17:04.559051  318132 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:17:04.593431  318132 cli_runner.go:164] Run: docker container inspect calico-324081 --format={{.State.Status}}
	I1213 10:17:04.615849  318132 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:17:04.615885  318132 kic_runner.go:114] Args: [docker exec --privileged calico-324081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:17:04.662195  318132 cli_runner.go:164] Run: docker container inspect calico-324081 --format={{.State.Status}}
	I1213 10:17:04.683786  318132 machine.go:94] provisionDockerMachine start ...
	I1213 10:17:04.683904  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:04.704598  318132 main.go:143] libmachine: Using SSH client type: native
	I1213 10:17:04.704990  318132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 10:17:04.705012  318132 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:17:04.705819  318132 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52746->127.0.0.1:33118: read: connection reset by peer
	I1213 10:17:07.859688  318132 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-324081
	
	I1213 10:17:07.859730  318132 ubuntu.go:182] provisioning hostname "calico-324081"
	I1213 10:17:07.859848  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:07.885978  318132 main.go:143] libmachine: Using SSH client type: native
	I1213 10:17:07.886362  318132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 10:17:07.886382  318132 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-324081 && echo "calico-324081" | sudo tee /etc/hostname
	I1213 10:17:08.050518  318132 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-324081
	
	I1213 10:17:08.050637  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:08.075684  318132 main.go:143] libmachine: Using SSH client type: native
	I1213 10:17:08.076023  318132 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 10:17:08.076039  318132 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-324081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-324081/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-324081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:17:08.235864  318132 main.go:143] libmachine: SSH cmd err, output: <nil>: 
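The hostname step above is deliberately idempotent: the 127.0.1.1 line is only rewritten when the new hostname is not already mapped in /etc/hosts. Equivalent logic with the quoting written out (a sketch, not a verbatim copy of what runs over SSH):

	# Map calico-324081 to 127.0.1.1 exactly once.
	if ! grep -q '[[:space:]]calico-324081$' /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-324081/' /etc/hosts
	  else
	    echo '127.0.1.1 calico-324081' | sudo tee -a /etc/hosts
	  fi
	fi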
	I1213 10:17:08.235906  318132 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:17:08.235932  318132 ubuntu.go:190] setting up certificates
	I1213 10:17:08.235950  318132 provision.go:84] configureAuth start
	I1213 10:17:08.236021  318132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-324081
	I1213 10:17:08.254223  318132 provision.go:143] copyHostCerts
	I1213 10:17:08.254305  318132 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:17:08.254327  318132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:17:08.254417  318132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:17:08.254549  318132 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:17:08.254562  318132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:17:08.254590  318132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:17:08.254651  318132 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:17:08.254662  318132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:17:08.254687  318132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:17:08.254740  318132 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.calico-324081 san=[127.0.0.1 192.168.85.2 calico-324081 localhost minikube]
	I1213 10:17:08.589852  318132 provision.go:177] copyRemoteCerts
	I1213 10:17:08.589924  318132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:17:08.589974  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:08.607337  318132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa Username:docker}
	I1213 10:17:08.711174  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:17:08.728600  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:17:08.746264  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:17:08.763404  318132 provision.go:87] duration metric: took 527.437239ms to configureAuth
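One quick way to confirm that the server certificate which just landed in /etc/docker carries the SANs requested above (127.0.0.1, 192.168.85.2, calico-324081, localhost, minikube) is openssl; a sketch to run inside the node, assuming OpenSSL 1.1.1+ for the -ext flag:

	# Show subject and SANs of the provisioned server certificate.
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName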
	I1213 10:17:08.763473  318132 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:17:08.763706  318132 config.go:182] Loaded profile config "calico-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 10:17:08.763737  318132 machine.go:97] duration metric: took 4.07992925s to provisionDockerMachine
	I1213 10:17:08.763757  318132 client.go:176] duration metric: took 10.326381358s to LocalClient.Create
	I1213 10:17:08.763786  318132 start.go:167] duration metric: took 10.32646034s to libmachine.API.Create "calico-324081"
	I1213 10:17:08.763803  318132 start.go:293] postStartSetup for "calico-324081" (driver="docker")
	I1213 10:17:08.763812  318132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:17:08.763868  318132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:17:08.763911  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:08.785443  318132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa Username:docker}
	I1213 10:17:08.895493  318132 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:17:08.898663  318132 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:17:08.898690  318132 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:17:08.898707  318132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:17:08.898763  318132 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:17:08.898841  318132 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:17:08.898947  318132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:17:08.906282  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:17:08.927444  318132 start.go:296] duration metric: took 163.620353ms for postStartSetup
	I1213 10:17:08.927839  318132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-324081
	I1213 10:17:08.944786  318132 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/config.json ...
	I1213 10:17:08.945062  318132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:17:08.945117  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:08.962932  318132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa Username:docker}
	I1213 10:17:09.068965  318132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:17:09.073873  318132 start.go:128] duration metric: took 10.640959287s to createHost
	I1213 10:17:09.073895  318132 start.go:83] releasing machines lock for "calico-324081", held for 10.641090242s
	I1213 10:17:09.074001  318132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-324081
	I1213 10:17:09.094265  318132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:17:09.094296  318132 ssh_runner.go:195] Run: cat /version.json
	I1213 10:17:09.094336  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:09.094352  318132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-324081
	I1213 10:17:09.120072  318132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa Username:docker}
	I1213 10:17:09.121542  318132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/calico-324081/id_rsa Username:docker}
	I1213 10:17:09.318922  318132 ssh_runner.go:195] Run: systemctl --version
	I1213 10:17:09.325703  318132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:17:09.329994  318132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:17:09.330065  318132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:17:09.357291  318132 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
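The find invocation above is logged with its shell quoting stripped; restored, it is roughly the following (a sketch), which renames every bridge/podman CNI config so it cannot compete with the CNI minikube is about to install:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;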
	I1213 10:17:09.357317  318132 start.go:496] detecting cgroup driver to use...
	I1213 10:17:09.357347  318132 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:17:09.357396  318132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:17:09.372671  318132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:17:09.386141  318132 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:17:09.386224  318132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:17:09.403846  318132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:17:09.422542  318132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:17:09.546434  318132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:17:09.672515  318132 docker.go:234] disabling docker service ...
	I1213 10:17:09.672660  318132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:17:09.695761  318132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:17:09.709052  318132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:17:09.824173  318132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:17:09.946407  318132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:17:09.959127  318132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:17:09.972377  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:17:09.981027  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:17:09.989874  318132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:17:09.990015  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:17:09.998536  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:17:10.009058  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:17:10.021224  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:17:10.031317  318132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:17:10.040607  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:17:10.050170  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:17:10.059049  318132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:17:10.069027  318132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:17:10.076858  318132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:17:10.084592  318132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:17:10.195719  318132 ssh_runner.go:195] Run: sudo systemctl restart containerd
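Taken together, the sed edits above are meant to leave /etc/containerd/config.toml using cgroupfs cgroups, the pinned pause image, and the standard CNI conf dir. A spot check after the restart (a sketch):

	# Verify the values the sed edits should have produced.
	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# Expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10.1",
	#         conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true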
	I1213 10:17:10.337164  318132 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:17:10.337243  318132 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:17:10.342356  318132 start.go:564] Will wait 60s for crictl version
	I1213 10:17:10.342454  318132 ssh_runner.go:195] Run: which crictl
	I1213 10:17:10.345937  318132 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:17:10.376166  318132 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
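The same version probe can be run by hand against the endpoint written to /etc/crictl.yaml earlier (a sketch):

	# Query containerd over its CRI socket.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version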
	I1213 10:17:10.376243  318132 ssh_runner.go:195] Run: containerd --version
	I1213 10:17:10.396145  318132 ssh_runner.go:195] Run: containerd --version
	I1213 10:17:10.420710  318132 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1213 10:17:10.423652  318132 cli_runner.go:164] Run: docker network inspect calico-324081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:17:10.439604  318132 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:17:10.443417  318132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
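Note the shape of that /etc/hosts rewrite: the redirection happens in an unprivileged shell into /tmp, and only the final cp runs under sudo, because in a plain "sudo cmd > /etc/hosts" the redirection would be performed by the unprivileged shell before sudo ever runs. The pattern in isolation (a sketch):

	# Replace or append a pinned hosts entry without redirecting as root.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts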
	I1213 10:17:10.452849  318132 kubeadm.go:884] updating cluster {Name:calico-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:17:10.452969  318132 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 10:17:10.453033  318132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:17:10.476957  318132 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:17:10.476983  318132 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:17:10.477040  318132 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:17:10.501041  318132 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:17:10.501068  318132 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:17:10.501076  318132 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1213 10:17:10.501170  318132 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-324081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1213 10:17:10.501239  318132 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:17:10.525623  318132 cni.go:84] Creating CNI manager for "calico"
	I1213 10:17:10.525661  318132 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:17:10.525684  318132 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-324081 NodeName:calico-324081 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:17:10.525812  318132 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-324081"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
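The generated config above (written to /var/tmp/minikube/kubeadm.yaml.new a few lines below) can be sanity-checked before init; a sketch, assuming a kubeadm new enough (v1.26+) to carry the validate subcommand:

	# Validate the generated kubeadm config against the target version's schema.
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new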
	I1213 10:17:10.525879  318132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:17:10.533633  318132 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:17:10.533703  318132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:17:10.541337  318132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1213 10:17:10.553510  318132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:17:10.566084  318132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
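Once the unit file and the 10-kubeadm.conf drop-in above are on disk, the rendered result can be inspected on the node (a sketch):

	# Show the kubelet unit plus the drop-in written above.
	sudo systemctl cat kubelet --no-pager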
	I1213 10:17:10.578304  318132 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:17:10.581610  318132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:17:10.590621  318132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:17:10.702472  318132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:17:10.720721  318132 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081 for IP: 192.168.85.2
	I1213 10:17:10.720793  318132 certs.go:195] generating shared ca certs ...
	I1213 10:17:10.720828  318132 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:10.720981  318132 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:17:10.721059  318132 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:17:10.721084  318132 certs.go:257] generating profile certs ...
	I1213 10:17:10.721164  318132 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.key
	I1213 10:17:10.721204  318132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt with IP's: []
	I1213 10:17:10.944278  318132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt ...
	I1213 10:17:10.944313  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: {Name:mke8a607b432293fe602421a496227c5dfab6a12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:10.944534  318132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.key ...
	I1213 10:17:10.944551  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.key: {Name:mkfe905c3f75def8f4e4be9f3f58f2a63d158b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:10.944650  318132 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key.9b0c22b7
	I1213 10:17:10.944673  318132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt.9b0c22b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 10:17:11.204924  318132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt.9b0c22b7 ...
	I1213 10:17:11.204956  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt.9b0c22b7: {Name:mk68a671e2d157be6e00c838427bcea68a6a343f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:11.205151  318132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key.9b0c22b7 ...
	I1213 10:17:11.205167  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key.9b0c22b7: {Name:mk24f019926ad356f523c39d28b54fd15932d998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:11.205254  318132 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt.9b0c22b7 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt
	I1213 10:17:11.205334  318132 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key.9b0c22b7 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key
	I1213 10:17:11.205396  318132 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.key
	I1213 10:17:11.205414  318132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.crt with IP's: []
	I1213 10:17:11.274496  318132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.crt ...
	I1213 10:17:11.274524  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.crt: {Name:mk8dfd9147c9ef1a8c6b3a1c2fad919637b5074b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:11.274695  318132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.key ...
	I1213 10:17:11.274707  318132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.key: {Name:mkd9360b5c763ca835834461407c123668437a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:17:11.274891  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:17:11.274936  318132 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:17:11.274949  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:17:11.274981  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:17:11.275009  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:17:11.275035  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:17:11.275085  318132 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:17:11.275671  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:17:11.292782  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:17:11.310540  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:17:11.328050  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:17:11.344608  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:17:11.361882  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:17:11.379642  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:17:11.396571  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:17:11.413790  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:17:11.430992  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:17:11.447979  318132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:17:11.464906  318132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:17:11.477318  318132 ssh_runner.go:195] Run: openssl version
	I1213 10:17:11.483363  318132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:17:11.490698  318132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:17:11.498182  318132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:17:11.501647  318132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:17:11.501754  318132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:17:11.543233  318132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:17:11.550804  318132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:17:11.558030  318132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:17:11.565144  318132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:17:11.572833  318132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:17:11.576974  318132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:17:11.577093  318132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:17:11.618231  318132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:17:11.625628  318132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:17:11.632902  318132 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:17:11.640228  318132 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:17:11.647643  318132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:17:11.651475  318132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:17:11.651563  318132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:17:11.692702  318132 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:17:11.700157  318132 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
	I1213 10:17:11.707226  318132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:17:11.710681  318132 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:17:11.710786  318132 kubeadm.go:401] StartCluster: {Name:calico-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:17:11.710883  318132 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:17:11.710948  318132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:17:11.736381  318132 cri.go:89] found id: ""
	I1213 10:17:11.736448  318132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:17:11.743924  318132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:17:11.751326  318132 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:17:11.751390  318132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:17:11.758910  318132 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:17:11.758933  318132 kubeadm.go:158] found existing configuration files:
	
	I1213 10:17:11.759011  318132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:17:11.766251  318132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:17:11.766318  318132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:17:11.780037  318132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:17:11.790852  318132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:17:11.790920  318132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:17:11.799415  318132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:17:11.808045  318132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:17:11.808109  318132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:17:11.816434  318132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:17:11.824951  318132 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:17:11.825016  318132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:17:11.833005  318132 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:17:11.872984  318132 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:17:11.873044  318132 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:17:11.894628  318132 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:17:11.894740  318132 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:17:11.894792  318132 kubeadm.go:319] OS: Linux
	I1213 10:17:11.894864  318132 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:17:11.894934  318132 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:17:11.895002  318132 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:17:11.895066  318132 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:17:11.895140  318132 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:17:11.895216  318132 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:17:11.895290  318132 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:17:11.895357  318132 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:17:11.895426  318132 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:17:11.965175  318132 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:17:11.965300  318132 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:17:11.965396  318132 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:17:11.970790  318132 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:17:11.977168  318132 out.go:252]   - Generating certificates and keys ...
	I1213 10:17:11.977278  318132 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:17:11.977366  318132 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:17:12.819576  318132 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:17:12.957765  318132 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:17:13.227359  318132 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:17:13.448411  318132 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:17:13.658594  318132 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:17:13.658983  318132 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-324081 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:17:14.137240  318132 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:17:14.137627  318132 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-324081 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 10:17:14.746852  318132 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:17:15.499900  318132 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:17:16.007762  318132 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:17:16.008112  318132 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:17:17.139852  318132 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:17:17.968077  318132 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:17:19.660309  318132 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:17:20.073559  318132 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:17:21.178674  318132 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:17:21.179286  318132 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:17:21.181854  318132 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953597846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953660485Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953767875Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953849370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953910048Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953970676Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954065652Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954126674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954193457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954302308Z" level=info msg="Connect containerd service"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954668147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.955354550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966201405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966268516Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966298096Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966337842Z" level=info msg="Start recovering state"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985180374Z" level=info msg="Start event monitor"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985226668Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985236646Z" level=info msg="Start streaming server"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985245721Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985254336Z" level=info msg="runtime interface starting up..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985260564Z" level=info msg="starting plugins..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985290447Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:02:17 no-preload-328069 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.987150021Z" level=info msg="containerd successfully booted in 0.060163s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:17:23.590165    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:17:23.590629    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:17:23.592009    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:17:23.592272    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:17:23.593681    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:17:23 up  1:59,  0 user,  load average: 1.59, 1.21, 1.20
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:17:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:17:20 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 13 10:17:20 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:20 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:20 no-preload-328069 kubelet[7955]: E1213 10:17:20.825257    7955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:17:20 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:17:20 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:17:21 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 13 10:17:21 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:21 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:21 no-preload-328069 kubelet[7961]: E1213 10:17:21.580163    7961 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:17:21 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:17:21 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:17:22 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 13 10:17:22 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:22 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:22 no-preload-328069 kubelet[7980]: E1213 10:17:22.301955    7980 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:17:22 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:17:22 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:17:23 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 13 10:17:23 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:23 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:17:23 no-preload-328069 kubelet[8015]: E1213 10:17:23.125201    8015 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:17:23 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:17:23 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
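Note on the failure mode: the kubelet journal above shows the kubelet crash-looping (restart counter 1202 through 1205) with "kubelet is configured to not run on a host using cgroup v1", so the control-plane static pods are never created and every apiserver check below reports "Stopped". The Ubuntu 20.04 runner (kernel 5.15.0-1084-aws) still boots with the legacy cgroup v1 hierarchy, which the v1.35.0-beta.0 kubelet rejects. As a minimal sketch (not minikube's own code), a host can be classified the way container runtimes usually do it, by testing for the cgroup v2 unified-hierarchy marker file:

package main

import (
	"fmt"
	"os"
)

// cgroupVersion reports which cgroup hierarchy the host mounts at
// /sys/fs/cgroup. The cgroup.controllers file only exists on the v2
// unified hierarchy, so its absence indicates legacy cgroup v1.
func cgroupVersion() int {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return 2
	}
	return 1
}

func main() {
	fmt.Printf("host cgroup version: v%d\n", cgroupVersion())
}

On this runner the sketch would print v1, matching the kubelet's refusal to start.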
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 2 (517.045114ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.35s)
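Every kubectl call in the post-mortem above dies with "dial tcp [::1]:8443: connect: connection refused", i.e. nothing is listening on the apiserver port at all, consistent with the crash-looping kubelet never starting the static pods. A quick local probe (an illustrative sketch, not part of the test suite) that separates a refused connect from a slow or filtered one:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the failing kubectl calls used; "connection refused"
	// means no listener on 8443, not a hung or rate-limited apiserver.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("listener present on :8443")
}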

TestStartStop/group/newest-cni/serial/Pause (9.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-987495 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (327.605492ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-987495 -n newest-cni-987495
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (312.713209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-987495 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (327.854471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-987495 -n newest-cni-987495
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (311.545784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
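All six checks above run the same status query; only the template field and the expected value change. A sketch of the shape of that check (binary path and profile name copied from the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// minikubeStatus queries one field of the profile's status (Host,
// APIServer or Kubelet), exactly as the test invocations above do.
func minikubeStatus(field, profile string) string {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	// status exits 2 for non-Running states (the log marks this
	// "may be ok"), so keep whatever stdout was produced anyway.
	out, _ := cmd.Output()
	return strings.TrimSpace(string(out))
}

func main() {
	got := minikubeStatus("Kubelet", "newest-cni-987495")
	fmt.Printf("post-unpause kubelet status = %q; want %q\n", got, "Running")
}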
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-987495
helpers_test.go:244: (dbg) docker inspect newest-cni-987495:

-- stdout --
	[
	    {
	        "Id": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	        "Created": "2025-12-13T09:56:44.68064601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:06:44.630226292Z",
	            "FinishedAt": "2025-12-13T10:06:43.28882954Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hosts",
	        "LogPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac-json.log",
	        "Name": "/newest-cni-987495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-987495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-987495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	                "LowerDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-987495",
	                "Source": "/var/lib/docker/volumes/newest-cni-987495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-987495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-987495",
	                "name.minikube.sigs.k8s.io": "newest-cni-987495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5075c185fe763e8b4bf25c5fa6e0906d897dd0a6aa9fa09a4f6785fde91f40b",
	            "SandboxKey": "/var/run/docker/netns/d5075c185fe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-987495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:1b:64:66:e5:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1cc05b29a6a537694a06e8a33e1431f6867104db51c8eb4299d9f9f07c01c4",
	                    "EndpointID": "e82ad5225efe9fbd3a246c4b71f89967b2a2d9edc684052e26b72ce55599a589",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-987495",
	                        "5d45a23b08cd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
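The inspect output confirms the container itself is healthy: state "running", with the apiserver's 8443/tcp published on 127.0.0.1:33106. When only one field is needed, docker's Go-template support avoids parsing the full JSON; for example (container name taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port mapped to the apiserver's 8443/tcp from the
	// same inspect data shown above; this run reported 33106.
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format,
		"newest-cni-987495").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}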
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (513.845443ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25: (1.624550535s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:05 UTC │                     │
	│ stop    │ -p newest-cni-987495 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-987495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │                     │
	│ image   │ newest-cni-987495 image list --format=json                                                                                                                                                                                                                 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ pause   │ -p newest-cni-987495 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ unpause │ -p newest-cni-987495 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:06:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:06:44.358606  285837 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:06:44.358774  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.358804  285837 out.go:374] Setting ErrFile to fd 2...
	I1213 10:06:44.358810  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.359110  285837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:06:44.359584  285837 out.go:368] Setting JSON to false
	I1213 10:06:44.360505  285837 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6557,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:06:44.360574  285837 start.go:143] virtualization:  
	I1213 10:06:44.365480  285837 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:06:44.368718  285837 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:06:44.368777  285837 notify.go:221] Checking for updates...
	I1213 10:06:44.374649  285837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:06:44.377632  285837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:44.380625  285837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:06:44.383607  285837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:06:44.386498  285837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:06:44.389949  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:44.390563  285837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:06:44.426169  285837 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:06:44.426412  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.479541  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.469338758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.479654  285837 docker.go:319] overlay module found
	I1213 10:06:44.482815  285837 out.go:179] * Using the docker driver based on existing profile
	I1213 10:06:44.485692  285837 start.go:309] selected driver: docker
	I1213 10:06:44.485711  285837 start.go:927] validating driver "docker" against &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.485823  285837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:06:44.486552  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.545256  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.535101087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.545615  285837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:06:44.545650  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:44.545706  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:44.545747  285837 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.548958  285837 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 10:06:44.551733  285837 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:06:44.554789  285837 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:06:44.557547  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:44.557592  285837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:06:44.557602  285837 cache.go:65] Caching tarball of preloaded images
	I1213 10:06:44.557636  285837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:06:44.557693  285837 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:06:44.557703  285837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:06:44.557824  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.577619  285837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:06:44.577644  285837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:06:44.577660  285837 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:06:44.577696  285837 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:06:44.577756  285837 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "newest-cni-987495"
	I1213 10:06:44.577778  285837 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:06:44.577787  285837 fix.go:54] fixHost starting: 
	I1213 10:06:44.578057  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.595484  285837 fix.go:112] recreateIfNeeded on newest-cni-987495: state=Stopped err=<nil>
	W1213 10:06:44.595545  285837 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 10:06:43.023116  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:45.025351  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:44.598729  285837 out.go:252] * Restarting existing docker container for "newest-cni-987495" ...
	I1213 10:06:44.598811  285837 cli_runner.go:164] Run: docker start newest-cni-987495
	I1213 10:06:44.855461  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.880412  285837 kic.go:430] container "newest-cni-987495" state is running.
	I1213 10:06:44.880797  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:44.909497  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.909726  285837 machine.go:94] provisionDockerMachine start ...
	I1213 10:06:44.909783  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:44.930622  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:44.931232  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:44.931291  285837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:06:44.932041  285837 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:06:48.091507  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
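	
	Note: the SSH target 127.0.0.1:33103 above is the host port Docker published for the container's 22/tcp. The same lookup can be reproduced by hand, using the key path and "docker" user that appear later in this log (a sketch for inspection, not part of the test run):
	
	    # Resolve the published SSH port, then connect as the "docker" user.
	    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-987495)
	    ssh -p "$PORT" -i /home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa docker@127.0.0.1 hostname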
	
	I1213 10:06:48.091560  285837 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 10:06:48.091625  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.110757  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.111074  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.111090  285837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 10:06:48.273955  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.274083  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.291615  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.291933  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.291961  285837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:06:48.443806  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:06:48.443836  285837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:06:48.443909  285837 ubuntu.go:190] setting up certificates
	I1213 10:06:48.443925  285837 provision.go:84] configureAuth start
	I1213 10:06:48.444014  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:48.461447  285837 provision.go:143] copyHostCerts
	I1213 10:06:48.461529  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:06:48.461544  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:06:48.461626  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:06:48.461731  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:06:48.461744  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:06:48.461773  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:06:48.461831  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:06:48.461840  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:06:48.461873  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:06:48.461929  285837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 10:06:48.588588  285837 provision.go:177] copyRemoteCerts
	I1213 10:06:48.588677  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:06:48.588742  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.606370  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.711093  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:06:48.728291  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:06:48.746238  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:06:48.763841  285837 provision.go:87] duration metric: took 319.890818ms to configureAuth
	I1213 10:06:48.763919  285837 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:06:48.764158  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:48.764172  285837 machine.go:97] duration metric: took 3.854438499s to provisionDockerMachine
	I1213 10:06:48.764181  285837 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 10:06:48.764199  285837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:06:48.764250  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:06:48.764297  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.781656  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.887571  285837 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:06:48.891032  285837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:06:48.891062  285837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:06:48.891074  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:06:48.891128  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:06:48.891231  285837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:06:48.891336  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:06:48.898692  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:48.916401  285837 start.go:296] duration metric: took 152.205033ms for postStartSetup
	I1213 10:06:48.916505  285837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:06:48.916556  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.933960  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.036570  285837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:06:49.041484  285837 fix.go:56] duration metric: took 4.463690867s for fixHost
	I1213 10:06:49.041511  285837 start.go:83] releasing machines lock for "newest-cni-987495", held for 4.463742733s
	I1213 10:06:49.041581  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:49.058404  285837 ssh_runner.go:195] Run: cat /version.json
	I1213 10:06:49.058462  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.058542  285837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:06:49.058607  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.080342  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.081196  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.272327  285837 ssh_runner.go:195] Run: systemctl --version
	I1213 10:06:49.280206  285837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:06:49.285584  285837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:06:49.285649  285837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:06:49.294944  285837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
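	
	Note: the find invocation above sidelines any bridge/podman CNI configs so that only the CNI minikube installs (kindnet here) stays active; in this run nothing matched. A readable equivalent of the rename pattern (a sketch with the same effect as the logged command):
	
	    # Rename competing CNI configs with a .mk_disabled suffix instead of deleting them.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	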
	I1213 10:06:49.295018  285837 start.go:496] detecting cgroup driver to use...
	I1213 10:06:49.295073  285837 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:06:49.295155  285837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:06:49.313555  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:06:49.330142  285837 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:06:49.330250  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:06:49.347394  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:06:49.361017  285837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:06:49.470304  285837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:06:49.578011  285837 docker.go:234] disabling docker service ...
	I1213 10:06:49.578102  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:06:49.592856  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:06:49.605575  285837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:06:49.713643  285837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:06:49.824293  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:06:49.838298  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:06:49.852989  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:06:49.861909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:06:49.870661  285837 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:06:49.870784  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:06:49.879670  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.888429  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:06:49.896909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.905618  285837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:06:49.913163  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:06:49.921632  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:06:49.930294  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:06:49.939291  285837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:06:49.947067  285837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:06:49.954313  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.072981  285837 ssh_runner.go:195] Run: sudo systemctl restart containerd
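	
	Note: the sed edits above pin the pause image, force cgroupfs (SystemdCgroup = false, matching the cgroup driver detected on the host), point the CNI conf_dir at /etc/cni/net.d, and re-enable unprivileged ports before containerd is restarted. The touched fields sit roughly like this in /etc/containerd/config.toml (an illustrative fragment keyed to the sed patterns above, not the full generated file; section layout can differ across containerd releases):
	
	    [plugins."io.containerd.grpc.v1.cri"]
	      sandbox_image = "registry.k8s.io/pause:3.10.1"
	      restrict_oom_score_adj = false
	      enable_unprivileged_ports = true
	      [plugins."io.containerd.grpc.v1.cri".cni]
	        conf_dir = "/etc/cni/net.d"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	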
	I1213 10:06:50.196904  285837 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:06:50.196994  285837 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:06:50.200903  285837 start.go:564] Will wait 60s for crictl version
	I1213 10:06:50.201048  285837 ssh_runner.go:195] Run: which crictl
	I1213 10:06:50.204672  285837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:06:50.230484  285837 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1213 10:06:50.230603  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.250716  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.275578  285837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:06:50.278424  285837 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:06:50.294657  285837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:06:50.298351  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
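	
	Note: the one-liner above is minikube's idempotent hosts update: drop any stale host.minikube.internal line, append the current gateway mapping, and install the rebuilt file over /etc/hosts via a temp file. Unrolled for readability (a sketch; the gateway IP 192.168.85.1 is taken from this run):
	
	    # Rebuild /etc/hosts without the old entry, then install the fresh mapping.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.85.1\thost.minikube.internal\n'
	    } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	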
	I1213 10:06:50.310828  285837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:06:50.313572  285837 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:06:50.313727  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:50.313810  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.342567  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.342593  285837 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:06:50.342654  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.371166  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.371189  285837 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:06:50.371197  285837 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:06:50.371299  285837 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
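	
	Note: the unit text above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp below); the empty ExecStart= line clears the base unit's command before the override is installed. After the daemon-reload, the effective command line can be confirmed with standard systemd tooling:
	
	    # Show the merged unit and check that the node-ip override took effect.
	    systemctl cat kubelet | grep -- '--node-ip'
	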
	I1213 10:06:50.371378  285837 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:06:50.396100  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:50.396123  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:50.396165  285837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:06:50.396196  285837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:06:50.396373  285837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
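	
	Note: this generated manifest is written to /var/tmp/minikube/kubeadm.yaml.new (the 2235-byte scp below) and diffed against the previous kubeadm.yaml to decide whether reconfiguration is needed. It can also be sanity-checked offline with kubeadm's own validator (shown as an aside, not part of the run; the subcommand exists in recent kubeadm releases):
	
	    # Validate the generated config without touching the cluster.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new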
	
	I1213 10:06:50.396459  285837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:06:50.404329  285837 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:06:50.404398  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:06:50.411842  285837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:06:50.424649  285837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:06:50.442140  285837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1213 10:06:50.455154  285837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:06:50.459006  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:06:50.468675  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.580293  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:50.596864  285837 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 10:06:50.596887  285837 certs.go:195] generating shared ca certs ...
	I1213 10:06:50.596905  285837 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:50.597091  285837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:06:50.597205  285837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:06:50.597223  285837 certs.go:257] generating profile certs ...
	I1213 10:06:50.597356  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 10:06:50.597436  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 10:06:50.597506  285837 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 10:06:50.597658  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:06:50.597722  285837 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:06:50.597739  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:06:50.597785  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:06:50.597830  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:06:50.597864  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:06:50.597929  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:50.598639  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:06:50.618438  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:06:50.636641  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:06:50.654754  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:06:50.674470  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:06:50.692387  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:06:50.709515  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:06:50.726691  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:06:50.744316  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:06:50.762153  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:06:50.779459  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:06:50.799850  285837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:06:50.814739  285837 ssh_runner.go:195] Run: openssl version
	I1213 10:06:50.821667  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.831484  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:06:50.840240  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844034  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844100  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.885521  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:06:50.892992  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.900259  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:06:50.907747  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911335  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911425  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.952315  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:06:50.959952  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.967099  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:06:50.974300  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977776  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977836  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:06:51.019185  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
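	
	Note: the openssl -hash / ln -fs pairs above maintain OpenSSL's hashed-directory layout under /etc/ssl/certs, where verification finds a CA via a symlink named <subject-hash>.0 (b5213941.0 and 51391683.0 in this run). One entry reproduced by hand (a sketch using paths from this log):
	
	    # Compute the subject hash and create the lookup symlink OpenSSL expects.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	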
	I1213 10:06:51.026990  285837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:06:51.031010  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:06:51.084662  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:06:51.132673  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:06:51.177864  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:06:51.221006  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:06:51.268266  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
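	
	Note: each -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid past that window, which feeds the decision to reuse the existing certs. Standalone form:
	
	    # Exit 0 (and print a message) if the cert is still valid 24h from now.
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	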
	I1213 10:06:51.309760  285837 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:51.309854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:06:51.309920  285837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:06:51.336480  285837 cri.go:89] found id: ""
	I1213 10:06:51.336643  285837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:06:51.344873  285837 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:06:51.344892  285837 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:06:51.344971  285837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:06:51.352443  285837 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:06:51.353090  285837 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.353376  285837 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-987495" cluster setting kubeconfig missing "newest-cni-987495" context setting]
	I1213 10:06:51.353816  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.355217  285837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:06:51.362937  285837 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 10:06:51.363006  285837 kubeadm.go:602] duration metric: took 18.107502ms to restartPrimaryControlPlane
	I1213 10:06:51.363022  285837 kubeadm.go:403] duration metric: took 53.271819ms to StartCluster
	I1213 10:06:51.363041  285837 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.363105  285837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.363987  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.364220  285837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:06:51.364499  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:51.364635  285837 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:06:51.364717  285837 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-987495"
	I1213 10:06:51.364742  285837 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-987495"
	I1213 10:06:51.364767  285837 addons.go:70] Setting default-storageclass=true in profile "newest-cni-987495"
	I1213 10:06:51.364819  285837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-987495"
	I1213 10:06:51.364774  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.365187  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.365396  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.364741  285837 addons.go:70] Setting dashboard=true in profile "newest-cni-987495"
	I1213 10:06:51.365978  285837 addons.go:239] Setting addon dashboard=true in "newest-cni-987495"
	W1213 10:06:51.365987  285837 addons.go:248] addon dashboard should already be in state true
	I1213 10:06:51.366008  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.366429  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.370287  285837 out.go:179] * Verifying Kubernetes components...
	I1213 10:06:51.373474  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:51.400526  285837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:06:51.404501  285837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:06:51.407418  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:06:51.407443  285837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:06:51.407622  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.417800  285837 addons.go:239] Setting addon default-storageclass=true in "newest-cni-987495"
	I1213 10:06:51.417844  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.418251  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.419100  285837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 10:06:47.522700  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:49.522769  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:51.523631  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:51.423855  285837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.423880  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:06:51.423942  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.466299  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.483641  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.486041  285837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.486059  285837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:06:51.486115  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.509387  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
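
Each of the three inspect calls above uses a Go template to pull the host port Docker mapped to the container's 22/tcp, which the sshutil lines then resolve to 127.0.0.1:33103. A sketch of the same lookup, assuming the docker CLI is on PATH and the container publishes 22/tcp; sshHostPort is an illustrative helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        // Same template as the log: index into NetworkSettings.Ports["22/tcp"][0].
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("newest-cni-987495")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // With the mapping in this log, port is 33103 and ssh targets 127.0.0.1.
        fmt.Printf("ssh -i id_rsa -p %s docker@127.0.0.1\n", port)
    }
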
	I1213 10:06:51.646942  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:51.680839  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:06:51.680862  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:06:51.697914  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:06:51.697938  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:06:51.704518  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.713551  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.723021  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:06:51.723048  285837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:06:51.778125  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:06:51.778149  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:06:51.806697  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:06:51.806719  285837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:06:51.819170  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:06:51.819253  285837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:06:51.832331  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:06:51.832355  285837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:06:51.845336  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:06:51.845362  285837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:06:51.859132  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:51.859155  285837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:06:51.872954  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
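
All ten dashboard manifests are passed to a single kubectl apply, so the addon is created (or fails) as one unit. A sketch of assembling that batched invocation, assuming the manifests are already staged under /etc/kubernetes/addons on the node and kubectl is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "dashboard-ns.yaml", "dashboard-clusterrole.yaml",
            "dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
            "dashboard-dp.yaml", "dashboard-role.yaml",
            "dashboard-rolebinding.yaml", "dashboard-sa.yaml",
            "dashboard-secret.yaml", "dashboard-svc.yaml",
        }
        // Build one apply with a -f flag per manifest, as in the log line above.
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", "/etc/kubernetes/addons/"+m)
        }
        cmd := exec.Command("kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
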
	I1213 10:06:52.275964  285837 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:06:52.276037  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
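
"waiting for apiserver process to appear" is a poll: the pgrep command above is re-run roughly every 500ms (see the repeats at 10:06:52.776, 10:06:53.276, and onward) until it exits 0. A local sketch of the same loop, with the ssh hop elided and an illustrative 2-minute deadline in place of the test's longer wait:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only once a matching process exists.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
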
	W1213 10:06:52.276137  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276165  285837 retry.go:31] will retry after 226.70351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276226  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276237  285837 retry.go:31] will retry after 265.695109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276427  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276440  285837 retry.go:31] will retry after 287.765057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.503091  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:52.542820  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:52.565377  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:52.583674  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.583713  285837 retry.go:31] will retry after 384.757306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.624746  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.624777  285837 retry.go:31] will retry after 404.862658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.656044  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.656099  285837 retry.go:31] will retry after 520.967054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.776249  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:52.969189  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.030822  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.051878  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.051909  285837 retry.go:31] will retry after 644.635232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:53.146104  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.146138  285837 retry.go:31] will retry after 713.617137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.177278  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.244074  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.244105  285837 retry.go:31] will retry after 478.208285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.276451  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:53.697474  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.722935  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.763188  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.763282  285837 retry.go:31] will retry after 791.669242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.776509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:53.833584  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.833619  285837 retry.go:31] will retry after 1.106769375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.860665  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.922352  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.922382  285837 retry.go:31] will retry after 439.211444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.277094  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:54.023458  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:56.023636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:54.362407  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:54.425741  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.425772  285837 retry.go:31] will retry after 994.413015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.555979  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:54.643378  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.643410  285837 retry.go:31] will retry after 1.597794919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.776687  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.941378  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:55.010057  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.010106  285837 retry.go:31] will retry after 1.576792043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.276187  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:55.420648  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:55.480113  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.480142  285837 retry.go:31] will retry after 2.26666641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.776309  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:56.242125  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:56.276562  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:56.308877  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.308912  285837 retry.go:31] will retry after 2.70852063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.587192  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:56.650840  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.650869  285837 retry.go:31] will retry after 1.746680045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.776898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.276239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.747110  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:57.776721  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:57.808824  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:57.808896  285837 retry.go:31] will retry after 3.338979851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:58.397695  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:58.460604  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.460637  285837 retry.go:31] will retry after 1.622921048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.776104  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.018609  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:59.122924  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.122951  285837 retry.go:31] will retry after 3.647698418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.276167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:58.523051  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:01.022919  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
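The two W lines above come from a different process (pid 279351, the no-preload-328069 profile at 192.168.76.2) whose output interleaves with pid 285837's stream, which is why their timestamps run slightly out of order here. Its node_ready.go check is the usual pattern of fetching the Node object and reading its Ready condition; below is a sketch of that kind of check with client-go, assuming a reachable kubeconfig path. The names and wiring are illustrative only, not minikube's actual code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            // e.g. "connect: connection refused" while the apiserver is down,
            // matching the node_ready.go warnings in the log
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // kubeconfig path is an assumption for this sketch
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(context.Background(), cs, "no-preload-328069")
        fmt.Println(ready, err)
    }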
	I1213 10:06:59.776456  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:00.084206  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:00.276658  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:00.330895  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.330933  285837 retry.go:31] will retry after 4.848981129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.776778  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.148539  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:01.211860  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.211894  285837 retry.go:31] will retry after 4.161832977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.277039  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.776560  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.276839  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.771686  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:07:02.776972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:02.901393  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:02.901424  285837 retry.go:31] will retry after 5.549971544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
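Every failure in this section is the same pre-flight problem: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 even that download is refused, so each apply dies before any manifest is sent. The --validate=false escape hatch that kubectl's own message suggests would skip the schema download, but the apply itself would still need a reachable apiserver, so it would not mask the underlying outage. A sketch of reissuing one of the logged commands with validation off, with the binary path, flags, and manifest path copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --validate=false skips the failing OpenAPI download; the apply
        // would still fail while the apiserver is unreachable
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storageclass.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply still failed:", err)
        }
    }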
	I1213 10:07:03.276936  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:03.776830  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.276724  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:03.522677  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:05.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:04.777224  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.180067  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:07:05.247404  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.247439  285837 retry.go:31] will retry after 4.476695877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.276547  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.374229  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:05.433759  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.433787  285837 retry.go:31] will retry after 4.37892264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.776166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.276368  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.776601  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.276152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.777077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.277179  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
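The six consecutive Run lines above are the other half of the loop: while the applies back off, minikube polls for a running kube-apiserver process about twice a second (the timestamps land on .276 and .776) using pgrep -xnf, which exits 0 only when a matching process exists. A sketch of that polling loop, with a hypothetical sshRun helper standing in for minikube's ssh_runner (which executes the command inside the node over SSH; here it simply runs locally):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // sshRun is a hypothetical stand-in for minikube's ssh_runner.
    func sshRun(ctx context.Context, cmd string) error {
        return exec.CommandContext(ctx, "sh", "-c", cmd).Run()
    }

    // waitForAPIServer polls for a kube-apiserver process every 500ms,
    // the cadence of the pgrep lines in the log.
    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits 0 only if a matching process exists
            if sshRun(ctx, `sudo pgrep -xnf 'kube-apiserver.*minikube.*'`) == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        fmt.Println(waitForAPIServer(ctx))
    }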
	I1213 10:07:08.451866  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:08.512981  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.513027  285837 retry.go:31] will retry after 9.372893328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.776155  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.276770  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:08.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:10.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:09.724392  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:09.776822  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:09.785453  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.785488  285837 retry.go:31] will retry after 5.955337388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.813514  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:09.876563  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.876594  285837 retry.go:31] will retry after 6.585328869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:10.276122  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:10.776152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.276997  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.776748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.276867  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.777071  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.276725  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.776915  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.276832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
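The half-second cadence of these pgrep runs is minikube polling for a kube-apiserver process before it will consider the control plane up. A minimal equivalent wait loop, assuming the same pattern as the logged command:

	# Poll every 500ms until an apiserver process started for this minikube profile appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done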
	W1213 10:07:12.022989  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:14.522670  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:14.777034  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.277144  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.741108  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:15.776723  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:15.809076  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:15.809111  285837 retry.go:31] will retry after 8.411412429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.276706  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:16.462334  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:16.524133  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.524164  285837 retry.go:31] will retry after 16.275248342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.776613  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.276278  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.776240  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.886523  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:17.954531  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:17.954562  285837 retry.go:31] will retry after 10.907278655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:18.276175  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:18.776243  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.276722  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:17.022862  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:19.522763  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:21.522806  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
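In parallel, process 279351 (the no-preload StartStop test) is polling the node's Ready condition every couple of seconds and hitting the same refused port on 192.168.76.2:8443. An equivalent manual probe, assuming a kubeconfig with credentials for that cluster, would fail the same way:

	kubectl get node no-preload-328069 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'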
	I1213 10:07:19.776239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.276570  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.776244  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.277087  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.776477  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.777167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.276540  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.776720  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:24.220799  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:24.276447  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:24.283800  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.283834  285837 retry.go:31] will retry after 19.949258949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:24.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:26.023564  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:24.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.276211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.776711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.276227  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.776716  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.276229  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.776183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.276941  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.776226  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.862833  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:28.922616  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:28.922648  285837 retry.go:31] will retry after 8.454738907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:29.277083  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:28.522731  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:30.522938  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:29.776182  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.277060  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.776835  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.276746  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.776414  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.276209  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.776715  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.799816  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:32.901801  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:32.901845  285837 retry.go:31] will retry after 14.65260505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:33.276216  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:33.776222  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.276756  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:33.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:35.522770  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:34.776764  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.277073  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.776211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.276331  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.776510  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.378406  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:37.440661  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.440691  285837 retry.go:31] will retry after 16.048870296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.776113  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.276917  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.276296  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:38.022809  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:40.522836  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:39.776735  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.276749  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.777116  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.277172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.776857  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.277141  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.776207  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.776690  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:44.233363  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:44.276911  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:44.294603  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.294641  285837 retry.go:31] will retry after 45.098120748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:42.523034  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:45.022823  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:44.776742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.276466  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.776133  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.280870  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.776232  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.276987  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.554729  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:47.616803  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.616837  285837 retry.go:31] will retry after 38.754607023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.776168  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.276203  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.776412  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.276189  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:47.022949  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:49.522878  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:49.776177  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.277157  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.776201  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.276146  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.776144  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:51.776242  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:51.804204  285837 cri.go:89] found id: ""
	I1213 10:07:51.804236  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.804246  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:51.804253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:51.804314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:51.829636  285837 cri.go:89] found id: ""
	I1213 10:07:51.829669  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.829679  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:51.829685  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:51.829745  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:51.857487  285837 cri.go:89] found id: ""
	I1213 10:07:51.857510  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.857519  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:51.857525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:51.857590  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:51.881972  285837 cri.go:89] found id: ""
	I1213 10:07:51.881998  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.882006  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:51.882012  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:51.882072  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:51.906050  285837 cri.go:89] found id: ""
	I1213 10:07:51.906074  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.906083  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:51.906089  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:51.906149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:51.930678  285837 cri.go:89] found id: ""
	I1213 10:07:51.930700  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.930708  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:51.930715  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:51.930774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:51.955590  285837 cri.go:89] found id: ""
	I1213 10:07:51.955661  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.955683  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:51.955701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:51.955786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:51.979349  285837 cri.go:89] found id: ""
	I1213 10:07:51.979374  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.979382  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
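Each cri.go/logs.go pair above is one probe: list all containers, running or exited, whose name matches a control-plane component. Every probe returning an empty ID list means none of the components were ever created. The same sweep by hand, with the component names taken from the log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done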
	I1213 10:07:51.979391  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:51.979405  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:52.048255  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[same five "Unhandled Error" lines and the connection-refused message as above]
	
	** /stderr **
	I1213 10:07:52.048276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:52.048290  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:52.074149  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:52.074187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:52.103113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:52.103142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:52.161764  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:52.161797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
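With no component containers to inspect, minikube falls back to host-level diagnostics. The same bundle can be collected by hand with the commands the log shows:

	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400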
	I1213 10:07:53.489865  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:53.547700  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:53.547730  285837 retry.go:31] will retry after 48.398435893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
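	The storageclass addon apply is being retried with backoff because client-side validation needs the apiserver's OpenAPI endpoint, which is also refusing connections. A minimal manual equivalent of that retry (a sketch only; the kubectl path, kubeconfig, and manifest are taken from the log, but the loop and the 30s sleep are placeholders, not minikube's actual backoff logic):

	    for i in 1 2 3; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	        -f /etc/kubernetes/addons/storageclass.yaml && break
	      sleep 30  # placeholder; the run above computed a 48.398435893s randomized backoff
	    done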
	W1213 10:07:52.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:54.023780  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:56.522671  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
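	Interleaved with the functional-test output, process 279351 (the no-preload StartStop test) is polling the "no-preload-328069" node's Ready condition and hitting the same connection-refused failure against 192.168.76.2:8443. The poll is roughly equivalent to the following (illustrative; the kubeconfig path is an assumption, matching the standard in-node path seen elsewhere in this log):

	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-328069 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # With the apiserver down this fails with "connection refused", matching the W-lines above.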
	I1213 10:07:54.676402  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:54.686866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:54.686943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:54.716493  285837 cri.go:89] found id: ""
	I1213 10:07:54.716514  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.716523  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:54.716529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:54.716584  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:54.740751  285837 cri.go:89] found id: ""
	I1213 10:07:54.740778  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.740787  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:54.740797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:54.740854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:54.763680  285837 cri.go:89] found id: ""
	I1213 10:07:54.763703  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.763712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:54.763717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:54.763773  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:54.787504  285837 cri.go:89] found id: ""
	I1213 10:07:54.787556  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.787564  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:54.787570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:54.787626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:54.812200  285837 cri.go:89] found id: ""
	I1213 10:07:54.812222  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.812231  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:54.812253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:54.812314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:54.841586  285837 cri.go:89] found id: ""
	I1213 10:07:54.841613  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.841623  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:54.841629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:54.841687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:54.865631  285837 cri.go:89] found id: ""
	I1213 10:07:54.865658  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.865667  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:54.865673  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:54.865731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:54.889746  285837 cri.go:89] found id: ""
	I1213 10:07:54.889773  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.889782  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:54.889792  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:54.889803  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:54.945120  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:54.945155  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:54.958121  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:54.958145  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:55.027564  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:55.027592  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:55.027605  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:55.053752  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:55.053788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
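	Each diagnostic cycle here runs the same sequence: pgrep for a kube-apiserver process, then query crictl for each control-plane container by name; every query returns an empty ID list, so minikube falls back to gathering kubelet, dmesg, containerd, and container-status logs. The per-component check condenses to the loop below (the crictl invocation is copied from the log; the loop wrapper itself is illustrative):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"  # empty output => "No container was found"
	    done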
	I1213 10:07:57.584821  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:57.597676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:57.597774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:57.621661  285837 cri.go:89] found id: ""
	I1213 10:07:57.621684  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.621692  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:57.621699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:57.621756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:57.649006  285837 cri.go:89] found id: ""
	I1213 10:07:57.649028  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.649036  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:57.649042  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:57.649107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:57.672839  285837 cri.go:89] found id: ""
	I1213 10:07:57.672866  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.672875  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:57.672881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:57.672937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:57.697343  285837 cri.go:89] found id: ""
	I1213 10:07:57.697366  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.697375  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:57.697381  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:57.697447  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:57.722254  285837 cri.go:89] found id: ""
	I1213 10:07:57.722276  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.722284  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:57.722291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:57.722346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:57.746125  285837 cri.go:89] found id: ""
	I1213 10:07:57.746150  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.746159  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:57.746165  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:57.746220  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:57.770612  285837 cri.go:89] found id: ""
	I1213 10:07:57.770679  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.770702  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:57.770720  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:57.770799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:57.795253  285837 cri.go:89] found id: ""
	I1213 10:07:57.795277  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.795285  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:57.795294  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:57.795320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:57.852923  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:57.852957  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:57.866320  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:57.866350  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:57.930573  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:57.930596  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:57.930609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:57.955644  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:57.955687  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:07:58.522782  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:00.523382  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:00.485873  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:00.498933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:00.499039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:00.588348  285837 cri.go:89] found id: ""
	I1213 10:08:00.588373  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.588383  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:00.588403  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:00.588480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:00.632508  285837 cri.go:89] found id: ""
	I1213 10:08:00.632581  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.632604  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:00.632623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:00.632721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:00.659204  285837 cri.go:89] found id: ""
	I1213 10:08:00.659231  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.659240  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:00.659246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:00.659303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:00.685440  285837 cri.go:89] found id: ""
	I1213 10:08:00.685468  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.685477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:00.685492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:00.685551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:00.710692  285837 cri.go:89] found id: ""
	I1213 10:08:00.710719  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.710728  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:00.710734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:00.710791  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:00.736661  285837 cri.go:89] found id: ""
	I1213 10:08:00.736683  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.736692  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:00.736698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:00.736766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:00.761591  285837 cri.go:89] found id: ""
	I1213 10:08:00.761617  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.761627  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:00.761634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:00.761695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:00.786438  285837 cri.go:89] found id: ""
	I1213 10:08:00.786465  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.786474  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:00.786484  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:00.786494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:00.842291  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:00.842327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:00.855993  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:00.856020  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:00.925840  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:00.925874  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:00.925888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:00.953015  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:00.953064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.486172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.496591  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:03.496662  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:03.534940  285837 cri.go:89] found id: ""
	I1213 10:08:03.534964  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.534973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:03.534979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:03.535038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:03.598662  285837 cri.go:89] found id: ""
	I1213 10:08:03.598688  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.598698  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:03.598704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:03.598766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:03.624092  285837 cri.go:89] found id: ""
	I1213 10:08:03.624114  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.624122  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:03.624129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:03.624188  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:03.649153  285837 cri.go:89] found id: ""
	I1213 10:08:03.649176  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.649185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:03.649196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:03.649255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:03.673710  285837 cri.go:89] found id: ""
	I1213 10:08:03.673778  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.673802  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:03.673822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:03.673901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:03.698952  285837 cri.go:89] found id: ""
	I1213 10:08:03.698978  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.699004  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:03.699011  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:03.699076  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:03.723499  285837 cri.go:89] found id: ""
	I1213 10:08:03.723548  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.723558  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:03.723563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:03.723626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:03.748795  285837 cri.go:89] found id: ""
	I1213 10:08:03.748819  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.748828  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:03.748837  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:03.748848  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:03.812342  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:03.812368  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:03.812388  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:03.841166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:03.841206  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.871116  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:03.871146  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:03.927807  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:03.927839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:08:03.022774  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:05.522704  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:06.441780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.452228  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:06.452309  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:06.476347  285837 cri.go:89] found id: ""
	I1213 10:08:06.476370  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.476378  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:06.476384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:06.476441  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:06.504937  285837 cri.go:89] found id: ""
	I1213 10:08:06.504961  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.504970  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:06.504977  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:06.505037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:06.553519  285837 cri.go:89] found id: ""
	I1213 10:08:06.553545  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.553553  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:06.553559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:06.553619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:06.608223  285837 cri.go:89] found id: ""
	I1213 10:08:06.608249  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.608258  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:06.608264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:06.608322  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:06.639732  285837 cri.go:89] found id: ""
	I1213 10:08:06.639801  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.639816  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:06.639823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:06.639886  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:06.668074  285837 cri.go:89] found id: ""
	I1213 10:08:06.668099  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.668108  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:06.668114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:06.668190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:06.691695  285837 cri.go:89] found id: ""
	I1213 10:08:06.691720  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.691729  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:06.691735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:06.691801  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:06.717093  285837 cri.go:89] found id: ""
	I1213 10:08:06.717120  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.717129  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:06.717140  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:06.717152  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:06.773552  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:06.773584  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.787064  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:06.787090  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:06.854164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:06.854189  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:06.854202  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:06.879668  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:06.879702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:08.022653  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:10.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:09.406742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.417411  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:09.417484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:09.442113  285837 cri.go:89] found id: ""
	I1213 10:08:09.442138  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.442147  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:09.442153  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:09.442218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:09.466316  285837 cri.go:89] found id: ""
	I1213 10:08:09.466342  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.466351  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:09.466357  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:09.466415  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:09.491678  285837 cri.go:89] found id: ""
	I1213 10:08:09.491703  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.491712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:09.491718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:09.491776  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:09.515316  285837 cri.go:89] found id: ""
	I1213 10:08:09.515337  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.515346  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:09.515352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:09.515410  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:09.567095  285837 cri.go:89] found id: ""
	I1213 10:08:09.567116  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.567125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:09.567131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:09.567197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:09.616045  285837 cri.go:89] found id: ""
	I1213 10:08:09.616067  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.616076  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:09.616082  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:09.616142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:09.640449  285837 cri.go:89] found id: ""
	I1213 10:08:09.640479  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.640488  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:09.640495  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:09.640555  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:09.664888  285837 cri.go:89] found id: ""
	I1213 10:08:09.664912  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.664921  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:09.664930  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:09.664941  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.691077  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:09.691106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:09.747246  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:09.747280  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:09.761112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:09.761140  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:09.830659  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:09.830682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:09.830695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.356184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.368119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:12.368203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:12.394250  285837 cri.go:89] found id: ""
	I1213 10:08:12.394279  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.394291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:12.394298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:12.394365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:12.419062  285837 cri.go:89] found id: ""
	I1213 10:08:12.419086  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.419095  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:12.419102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:12.419159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:12.446274  285837 cri.go:89] found id: ""
	I1213 10:08:12.446300  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.446308  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:12.446315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:12.446371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:12.469875  285837 cri.go:89] found id: ""
	I1213 10:08:12.469901  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.469910  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:12.469917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:12.469977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:12.495108  285837 cri.go:89] found id: ""
	I1213 10:08:12.495136  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.495145  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:12.495152  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:12.495207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:12.521169  285837 cri.go:89] found id: ""
	I1213 10:08:12.521190  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.521198  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:12.521204  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:12.521258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:12.557387  285837 cri.go:89] found id: ""
	I1213 10:08:12.557412  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.557421  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:12.557427  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:12.557483  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:12.586888  285837 cri.go:89] found id: ""
	I1213 10:08:12.586913  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.586922  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:12.586931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:12.586942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:12.654328  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:12.654361  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:12.668044  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:12.668071  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:12.737226  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:12.737248  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:12.737261  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.762749  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:12.762783  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:12.022956  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:14.522703  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:15.289142  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.301958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:15.302029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:15.330317  285837 cri.go:89] found id: ""
	I1213 10:08:15.330344  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.330353  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:15.330359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:15.330423  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:15.358090  285837 cri.go:89] found id: ""
	I1213 10:08:15.358115  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.358124  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:15.358130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:15.358187  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:15.382832  285837 cri.go:89] found id: ""
	I1213 10:08:15.382862  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.382871  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:15.382877  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:15.382940  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:15.409515  285837 cri.go:89] found id: ""
	I1213 10:08:15.409539  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.409549  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:15.409555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:15.409613  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:15.433885  285837 cri.go:89] found id: ""
	I1213 10:08:15.433911  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.433920  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:15.433926  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:15.433989  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:15.458618  285837 cri.go:89] found id: ""
	I1213 10:08:15.458643  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.458653  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:15.458659  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:15.458715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:15.482592  285837 cri.go:89] found id: ""
	I1213 10:08:15.482616  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.482625  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:15.482635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:15.482693  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:15.511125  285837 cri.go:89] found id: ""
	I1213 10:08:15.511153  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.511163  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:15.511172  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:15.511183  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:15.584797  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:15.584833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:15.598725  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:15.598752  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:15.681678  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:15.681701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:15.681714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:15.707610  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:15.707646  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
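Each gathering pass is preceded by the enumeration visible above: every expected control-plane container is checked by name, and `found id: ""` for all eight names means no component container was ever created, consistent with the node failing before any pod sandbox starts. The same check done by hand (a sketch mirroring the component names in the log):

    # List each expected component; empty output means the container never existed.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%s: %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c")"
    done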
	I1213 10:08:18.235184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.246689  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:18.246762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:18.271129  285837 cri.go:89] found id: ""
	I1213 10:08:18.271155  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.271165  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:18.271172  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:18.271240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:18.296110  285837 cri.go:89] found id: ""
	I1213 10:08:18.296135  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.296144  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:18.296150  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:18.296208  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:18.321267  285837 cri.go:89] found id: ""
	I1213 10:08:18.321290  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.321304  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:18.321311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:18.321368  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:18.349274  285837 cri.go:89] found id: ""
	I1213 10:08:18.349300  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.349309  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:18.349315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:18.349414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:18.373235  285837 cri.go:89] found id: ""
	I1213 10:08:18.373310  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.373325  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:18.373335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:18.373395  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:18.397157  285837 cri.go:89] found id: ""
	I1213 10:08:18.397181  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.397190  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:18.397196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:18.397283  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:18.421144  285837 cri.go:89] found id: ""
	I1213 10:08:18.421168  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.421177  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:18.421184  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:18.421243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:18.449567  285837 cri.go:89] found id: ""
	I1213 10:08:18.449643  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.449659  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:18.449670  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:18.449682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:18.505803  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:18.505836  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:18.520075  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:18.520099  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:18.640681  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:18.640706  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:18.640720  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:18.666166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:18.666201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:17.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:19.522795  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:20.031934  279351 node_ready.go:38] duration metric: took 6m0.009733727s for node "no-preload-328069" to be "Ready" ...
	I1213 10:08:20.035146  279351 out.go:203] 
	W1213 10:08:20.038039  279351 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:08:20.038064  279351 out.go:285] * 
	W1213 10:08:20.040199  279351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:08:20.043110  279351 out.go:203] 
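Process 279351 gives up here: node_ready polled the node's Ready condition roughly every 2.5s (see the 10:08:12.02 / 14.52 / 17.02 / 19.52 timestamps above) until the `wait 6m0s for node` budget lapsed, then exited with GUEST_START. As the box above suggests, the next diagnostic step is to capture the full logs for the failed profile (profile name taken from the surrounding lines):

    minikube logs --file=logs.txt -p no-preload-328069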
	I1213 10:08:21.195745  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.206020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:21.206084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:21.246086  285837 cri.go:89] found id: ""
	I1213 10:08:21.246106  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.246115  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:21.246122  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:21.246181  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:21.273446  285837 cri.go:89] found id: ""
	I1213 10:08:21.273469  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.273477  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:21.273483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:21.273543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:21.312010  285837 cri.go:89] found id: ""
	I1213 10:08:21.312031  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.312040  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:21.312046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:21.312104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:21.357158  285837 cri.go:89] found id: ""
	I1213 10:08:21.357177  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.357185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:21.357192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:21.357248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:21.398112  285837 cri.go:89] found id: ""
	I1213 10:08:21.398135  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.398143  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:21.398149  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:21.398205  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:21.447244  285837 cri.go:89] found id: ""
	I1213 10:08:21.447268  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.447276  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:21.447283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:21.447347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:21.495558  285837 cri.go:89] found id: ""
	I1213 10:08:21.495581  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.495589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:21.495595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:21.495652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:21.555224  285837 cri.go:89] found id: ""
	I1213 10:08:21.555248  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.555257  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:21.555270  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:21.555281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:21.627890  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:21.627922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:21.674689  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:21.674714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:21.747238  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:21.747267  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:21.763785  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:21.763813  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:21.844164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:24.345832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.356414  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:24.356487  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:24.381314  285837 cri.go:89] found id: ""
	I1213 10:08:24.381340  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.381349  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:24.381356  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:24.381418  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:24.405581  285837 cri.go:89] found id: ""
	I1213 10:08:24.405606  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.405614  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:24.405621  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:24.405679  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:24.429873  285837 cri.go:89] found id: ""
	I1213 10:08:24.429895  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.429904  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:24.429911  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:24.429971  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:24.457573  285837 cri.go:89] found id: ""
	I1213 10:08:24.457600  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.457609  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:24.457616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:24.457674  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:24.481838  285837 cri.go:89] found id: ""
	I1213 10:08:24.481865  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.481874  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:24.481880  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:24.481937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:24.507009  285837 cri.go:89] found id: ""
	I1213 10:08:24.507034  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.507043  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:24.507049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:24.507105  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:24.550665  285837 cri.go:89] found id: ""
	I1213 10:08:24.550687  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.550695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:24.550702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:24.550757  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:24.584765  285837 cri.go:89] found id: ""
	I1213 10:08:24.584787  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.584805  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:24.584815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:24.584828  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:24.652249  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:24.652271  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:24.652285  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:24.677128  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:24.677161  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:24.705609  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:24.705635  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:24.761364  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:24.761399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:26.371661  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:08:26.432065  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:26.432188  285837 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
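The addon apply fails during validation, before anything reaches the cluster: `kubectl apply` downloads the server's OpenAPI schema to validate the manifest, and that download needs the same dead apiserver. The `--validate=false` hint in the stderr would only skip the schema fetch; the apply itself would still be refused. The failing callback, reflowed from the line above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml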
	I1213 10:08:27.285248  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.295647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:27.295723  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:27.320532  285837 cri.go:89] found id: ""
	I1213 10:08:27.320555  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.320564  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:27.320570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:27.320628  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:27.344722  285837 cri.go:89] found id: ""
	I1213 10:08:27.344748  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.344758  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:27.344764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:27.344852  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:27.370726  285837 cri.go:89] found id: ""
	I1213 10:08:27.370751  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.370760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:27.370766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:27.370849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:27.394557  285837 cri.go:89] found id: ""
	I1213 10:08:27.394583  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.394617  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:27.394628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:27.394703  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:27.418575  285837 cri.go:89] found id: ""
	I1213 10:08:27.418601  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.418610  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:27.418616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:27.418673  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:27.444932  285837 cri.go:89] found id: ""
	I1213 10:08:27.444953  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.444962  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:27.444968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:27.445029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:27.468135  285837 cri.go:89] found id: ""
	I1213 10:08:27.468213  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.468237  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:27.468256  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:27.468330  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:27.493054  285837 cri.go:89] found id: ""
	I1213 10:08:27.493079  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.493089  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:27.493098  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:27.493126  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:27.555066  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:27.555141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:27.572569  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:27.572644  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:27.641611  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:27.641682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:27.641704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:27.667653  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:27.667690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:29.393883  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:08:29.454286  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:29.454393  285837 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
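The dashboard addon fails the same way, once per manifest: ten files, ten identical OpenAPI-download refusals, and the addon machinery queues a retry. A quick probe separates "apiserver down" from "manifest broken" (a sketch, run inside the node, assuming curl is present there):

    # connection refused -> apiserver unreachable; any HTTP status -> look at the manifests.
    curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/openapi/v2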
	I1213 10:08:30.208961  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.219829  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:30.219950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:30.248442  285837 cri.go:89] found id: ""
	I1213 10:08:30.248471  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.248480  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:30.248486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:30.248569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:30.273935  285837 cri.go:89] found id: ""
	I1213 10:08:30.273964  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.273973  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:30.273979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:30.274067  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:30.299229  285837 cri.go:89] found id: ""
	I1213 10:08:30.299256  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.299265  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:30.299271  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:30.299328  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:30.327770  285837 cri.go:89] found id: ""
	I1213 10:08:30.327792  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.327801  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:30.327807  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:30.327863  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:30.352796  285837 cri.go:89] found id: ""
	I1213 10:08:30.352851  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.352861  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:30.352867  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:30.352928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:30.376505  285837 cri.go:89] found id: ""
	I1213 10:08:30.376530  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.376539  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:30.376546  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:30.376646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:30.400512  285837 cri.go:89] found id: ""
	I1213 10:08:30.400536  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.400545  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:30.400551  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:30.400611  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:30.425139  285837 cri.go:89] found id: ""
	I1213 10:08:30.425162  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.425171  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:30.425181  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:30.425192  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:30.454686  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:30.454713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:30.509531  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:30.509568  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:30.527699  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:30.527727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:30.597883  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:30.597907  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:30.597920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.123638  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.134229  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:33.134302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:33.161169  285837 cri.go:89] found id: ""
	I1213 10:08:33.161201  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.161210  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:33.161218  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:33.161278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:33.189591  285837 cri.go:89] found id: ""
	I1213 10:08:33.189614  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.189623  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:33.189629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:33.189691  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:33.213288  285837 cri.go:89] found id: ""
	I1213 10:08:33.213315  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.213325  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:33.213331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:33.213388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:33.237186  285837 cri.go:89] found id: ""
	I1213 10:08:33.237214  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.237223  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:33.237230  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:33.237291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:33.265589  285837 cri.go:89] found id: ""
	I1213 10:08:33.265615  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.265623  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:33.265629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:33.265687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:33.289791  285837 cri.go:89] found id: ""
	I1213 10:08:33.289862  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.289884  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:33.289902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:33.289986  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:33.314058  285837 cri.go:89] found id: ""
	I1213 10:08:33.314085  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.314094  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:33.314099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:33.314170  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:33.338463  285837 cri.go:89] found id: ""
	I1213 10:08:33.338490  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.338499  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
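The block above is minikube's control-plane inventory: one crictl ps -a --quiet --name=<component> per component, each returning an empty ID list. The same sweep can be reproduced by hand with a small loop; this is a sketch assuming crictl is on PATH inside the node and talks to the default containerd socket:

    # Reproduce the per-component container sweep from the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done

In this run every component prints <none>, matching the repeated 'found id: ""' lines.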
	I1213 10:08:33.338509  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:33.338521  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:33.393919  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:33.393953  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
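The dmesg invocation above uses short util-linux options; spelled out, it disables the pager and color output, switches to human-readable timestamps, and keeps only warning-or-worse records. An equivalent long-option form (assuming util-linux dmesg):

    # Same filter as the log line above, with long options for readability.
    sudo dmesg --nopager --human --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400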
	I1213 10:08:33.407152  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:33.407179  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:33.470838  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:33.470862  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:33.470875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.495641  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:33.495672  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
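The "container status" command above is a two-stage fallback: resolve crictl to an absolute path if possible (otherwise use the bare name), and if that listing fails for any reason, fall back to the docker CLI. Unrolled into plain shell under the same assumptions:

    # Unrolled form of the one-liner above; semantics preserved: docker is
    # tried when crictl is missing OR when the crictl listing itself fails.
    if sudo "$(which crictl || echo crictl)" ps -a; then
      :  # CRI listing succeeded
    else
      sudo docker ps -a
    fi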
	I1213 10:08:36.035663  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.047578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:36.047649  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:36.076122  285837 cri.go:89] found id: ""
	I1213 10:08:36.076145  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.076154  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:36.076160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:36.076236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:36.105524  285837 cri.go:89] found id: ""
	I1213 10:08:36.105554  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.105564  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:36.105570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:36.105629  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:36.134491  285837 cri.go:89] found id: ""
	I1213 10:08:36.134565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.134587  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:36.134607  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:36.134695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:36.159376  285837 cri.go:89] found id: ""
	I1213 10:08:36.159449  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.159471  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:36.159489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:36.159608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:36.185490  285837 cri.go:89] found id: ""
	I1213 10:08:36.185565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.185590  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:36.185604  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:36.185676  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:36.219394  285837 cri.go:89] found id: ""
	I1213 10:08:36.219422  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.219431  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:36.219438  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:36.219494  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:36.243333  285837 cri.go:89] found id: ""
	I1213 10:08:36.243357  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.243367  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:36.243373  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:36.243435  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:36.267160  285837 cri.go:89] found id: ""
	I1213 10:08:36.267187  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.267196  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:36.267206  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:36.267218  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:36.280345  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:36.280375  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:36.343250  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:36.343272  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:36.343284  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:36.368575  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:36.368610  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.395546  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:36.395573  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:38.955916  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.966663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:38.966732  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:38.991698  285837 cri.go:89] found id: ""
	I1213 10:08:38.991722  285837 logs.go:282] 0 containers: []
	W1213 10:08:38.991730  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:38.991737  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:38.991795  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:39.029472  285837 cri.go:89] found id: ""
	I1213 10:08:39.029501  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.029510  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:39.029515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:39.029610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:39.058052  285837 cri.go:89] found id: ""
	I1213 10:08:39.058082  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.058097  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:39.058104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:39.058165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:39.086309  285837 cri.go:89] found id: ""
	I1213 10:08:39.086331  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.086339  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:39.086345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:39.086407  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:39.113392  285837 cri.go:89] found id: ""
	I1213 10:08:39.113420  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.113430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:39.113436  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:39.113497  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:39.138083  285837 cri.go:89] found id: ""
	I1213 10:08:39.138109  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.138118  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:39.138125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:39.138182  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:39.162132  285837 cri.go:89] found id: ""
	I1213 10:08:39.162160  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.162170  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:39.162176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:39.162239  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:39.190634  285837 cri.go:89] found id: ""
	I1213 10:08:39.190661  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.190670  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:39.190679  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:39.190691  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:39.215694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:39.215727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:39.246161  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:39.246189  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:39.305962  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:39.305996  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:39.319717  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:39.319744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:39.382189  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
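The same gather cycle repeats roughly every three seconds (10:08:30, :33, :36, :39, ...) because minikube is polling for a kube-apiserver process and re-dumping diagnostics on every miss. A hypothetical bash rendition of that outer loop follows; the real logic lives in minikube's Go code, so this is only an illustration of the observable behavior:

    # Sketch of the polling behavior visible in the log, not minikube's code.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "apiserver process not found; collecting logs again..."
      sleep 3
    done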
	I1213 10:08:41.883328  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.894154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:41.894228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:41.921476  285837 cri.go:89] found id: ""
	I1213 10:08:41.921500  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.921509  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:41.921515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:41.921573  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:41.945812  285837 cri.go:89] found id: ""
	I1213 10:08:41.945835  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.945843  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:41.945849  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:41.945912  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:41.946276  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:08:41.977805  285837 cri.go:89] found id: ""
	I1213 10:08:41.977840  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.977849  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:41.977855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:41.977923  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 10:08:42.037880  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:42.037998  285837 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:08:42.038333  285837 cri.go:89] found id: ""
	I1213 10:08:42.038351  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.038357  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:42.038364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:42.038439  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:42.041174  285837 out.go:179] * Enabled addons: 
	I1213 10:08:42.044041  285837 addons.go:530] duration metric: took 1m50.679416537s for enable addons: enabled=[]
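The addon failure above is a knock-on effect of the same dead apiserver: kubectl tries to download the OpenAPI schema from https://localhost:8443 to validate storageclass.yaml, the connection is refused, and enable addons finishes with an empty set (enabled=[]) after 1m50s of retries. The hint in the error (--validate=false) only skips schema validation; the apply itself still needs a reachable apiserver, so in this state the following would fail the same way:

    # Same apply as in the log, with validation disabled per kubectl's hint.
    # This skips the OpenAPI download but still requires a live apiserver.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml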
	I1213 10:08:42.069124  285837 cri.go:89] found id: ""
	I1213 10:08:42.069158  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.069173  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:42.069181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:42.069277  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:42.114076  285837 cri.go:89] found id: ""
	I1213 10:08:42.114106  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.114119  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:42.114129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:42.114201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:42.143501  285837 cri.go:89] found id: ""
	I1213 10:08:42.143577  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.143587  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:42.143594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:42.143665  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:42.174231  285837 cri.go:89] found id: ""
	I1213 10:08:42.174258  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.174267  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:42.174278  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:42.174291  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:42.209465  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:42.209500  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:42.270663  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:42.270702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:42.286732  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:42.286769  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:42.356785  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:42.356809  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:42.356822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:44.882858  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.893320  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:44.893392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:44.918585  285837 cri.go:89] found id: ""
	I1213 10:08:44.918612  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.918621  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:44.918628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:44.918686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:44.943719  285837 cri.go:89] found id: ""
	I1213 10:08:44.943746  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.943755  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:44.943762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:44.943822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:44.968177  285837 cri.go:89] found id: ""
	I1213 10:08:44.968204  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.968213  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:44.968219  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:44.968273  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:45.012025  285837 cri.go:89] found id: ""
	I1213 10:08:45.012052  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.012062  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:45.012069  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:45.012140  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:45.059717  285837 cri.go:89] found id: ""
	I1213 10:08:45.059815  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.059841  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:45.059864  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:45.059985  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:45.146429  285837 cri.go:89] found id: ""
	I1213 10:08:45.146507  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.146534  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:45.146585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:45.146680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:45.192650  285837 cri.go:89] found id: ""
	I1213 10:08:45.192683  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.192695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:45.192704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:45.192786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:45.240936  285837 cri.go:89] found id: ""
	I1213 10:08:45.241266  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.241306  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:45.241344  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:45.241423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:45.280178  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:45.280250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:45.343980  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:45.344023  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:45.357799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:45.357833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:45.421366  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
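kubectl keeps dialing [::1]:8443 because the in-node kubeconfig's server field is https://localhost:8443 (visible in the errors above) and localhost resolves to the IPv6 loopback first. A quick way to confirm which endpoint is configured, assuming shell access on the node:

    # Show the apiserver endpoint the failing kubectl calls are using.
    sudo grep -m1 'server:' /var/lib/minikube/kubeconfig
    # expected, from the errors above: server: https://localhost:8443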
	I1213 10:08:45.421390  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:45.421403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:47.952239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.963745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:47.963816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:47.989230  285837 cri.go:89] found id: ""
	I1213 10:08:47.989253  285837 logs.go:282] 0 containers: []
	W1213 10:08:47.989262  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:47.989288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:47.989360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:48.018062  285837 cri.go:89] found id: ""
	I1213 10:08:48.018087  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.018096  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:48.018102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:48.018165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:48.049042  285837 cri.go:89] found id: ""
	I1213 10:08:48.049068  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.049078  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:48.049084  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:48.049147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:48.077924  285837 cri.go:89] found id: ""
	I1213 10:08:48.077946  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.077955  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:48.077965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:48.078023  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:48.106258  285837 cri.go:89] found id: ""
	I1213 10:08:48.106284  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.106292  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:48.106298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:48.106355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:48.130836  285837 cri.go:89] found id: ""
	I1213 10:08:48.130861  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.130869  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:48.130883  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:48.130945  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:48.157446  285837 cri.go:89] found id: ""
	I1213 10:08:48.157470  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.157479  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:48.157485  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:48.157543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:48.182657  285837 cri.go:89] found id: ""
	I1213 10:08:48.182687  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.182697  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:48.182707  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:48.182719  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:48.196607  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:48.196685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:48.261824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:48.261895  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:48.261914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:48.287393  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:48.287436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:48.318617  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:48.318647  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:50.875656  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.886169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:50.886240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:50.910775  285837 cri.go:89] found id: ""
	I1213 10:08:50.910801  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.910810  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:50.910817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:50.910874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:50.936159  285837 cri.go:89] found id: ""
	I1213 10:08:50.936185  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.936194  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:50.936200  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:50.936262  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:50.960845  285837 cri.go:89] found id: ""
	I1213 10:08:50.960879  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.960888  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:50.960895  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:50.960956  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:50.989232  285837 cri.go:89] found id: ""
	I1213 10:08:50.989262  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.989271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:50.989277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:50.989361  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:51.017908  285837 cri.go:89] found id: ""
	I1213 10:08:51.017936  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.017944  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:51.017950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:51.018012  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:51.062320  285837 cri.go:89] found id: ""
	I1213 10:08:51.062355  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.062363  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:51.062369  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:51.062436  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:51.091004  285837 cri.go:89] found id: ""
	I1213 10:08:51.091038  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.091047  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:51.091053  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:51.091118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:51.116510  285837 cri.go:89] found id: ""
	I1213 10:08:51.116543  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.116552  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:51.116561  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:51.116574  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:51.147665  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:51.147690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:51.203425  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:51.203457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:51.216632  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:51.216657  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:51.278157  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
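Beyond the process and port checks, the endpoint can also be probed directly. A hypothetical probe from inside the node (assuming curl is available there) against the apiserver's readyz endpoint, which in this state should be refused just like kubectl's requests:

    # Direct probe of the endpoint; in this run it should fail immediately.
    curl -ksS --max-time 5 https://localhost:8443/readyz \
      || echo "refused or timed out, matching the kubectl errors above"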
	I1213 10:08:51.278181  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:51.278195  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:53.804075  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.815823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:53.815894  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:53.841157  285837 cri.go:89] found id: ""
	I1213 10:08:53.841180  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.841189  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:53.841195  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:53.841251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:53.869816  285837 cri.go:89] found id: ""
	I1213 10:08:53.869840  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.869850  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:53.869856  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:53.869916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:53.893754  285837 cri.go:89] found id: ""
	I1213 10:08:53.893781  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.893789  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:53.893796  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:53.893856  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:53.917859  285837 cri.go:89] found id: ""
	I1213 10:08:53.917881  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.917890  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:53.917896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:53.917957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:53.941859  285837 cri.go:89] found id: ""
	I1213 10:08:53.941886  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.941895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:53.941902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:53.941964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:53.969296  285837 cri.go:89] found id: ""
	I1213 10:08:53.969320  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.969329  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:53.969335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:53.969392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:53.993419  285837 cri.go:89] found id: ""
	I1213 10:08:53.993448  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.993458  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:53.993464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:53.993520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:54.026047  285837 cri.go:89] found id: ""
	I1213 10:08:54.026074  285837 logs.go:282] 0 containers: []
	W1213 10:08:54.026084  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:54.026094  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:54.026106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:54.042132  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:54.042160  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:54.121343  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:54.121416  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:54.121439  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:54.146468  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:54.146502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:54.173087  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:54.173114  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
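
The "0 containers" probes above come from minikube asking the CRI runtime, by name, for each expected control-plane component. A minimal Go sketch of that probe (illustrative only, not minikube's actual source; it reuses the exact crictl invocation shown in the log):

	// cricheck.go - mirrors the probe loop visible above: for each control-plane
	// component, ask crictl for matching containers and report when none exist.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same query the log shows: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%q containers: %v\n", name, ids)
			}
		}
	}

In this run every component returns an empty ID list, which is why each cycle falls through to log gathering.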
	I1213 10:08:56.730884  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.741016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:56.741083  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:56.765437  285837 cri.go:89] found id: ""
	I1213 10:08:56.765461  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.765470  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:56.765476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:56.765535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:56.804701  285837 cri.go:89] found id: ""
	I1213 10:08:56.804725  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.804734  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:56.804740  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:56.804796  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:56.831548  285837 cri.go:89] found id: ""
	I1213 10:08:56.831573  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.831582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:56.831588  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:56.831646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:56.860131  285837 cri.go:89] found id: ""
	I1213 10:08:56.860154  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.860162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:56.860169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:56.860223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:56.884508  285837 cri.go:89] found id: ""
	I1213 10:08:56.884532  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.884540  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:56.884547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:56.884602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:56.909197  285837 cri.go:89] found id: ""
	I1213 10:08:56.909223  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.909232  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:56.909238  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:56.909296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:56.934089  285837 cri.go:89] found id: ""
	I1213 10:08:56.934110  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.934119  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:56.934126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:56.934183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:56.958725  285837 cri.go:89] found id: ""
	I1213 10:08:56.958745  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.958754  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:56.958764  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:56.958775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:57.027824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:57.027846  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:57.027859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:57.054139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:57.054169  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:57.085873  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:57.085903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:57.144978  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:57.145011  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
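
Each cycle also gathers the same four log sources from the node: the containerd and kubelet journals, a filtered dmesg, and container status. A sketch of that collection step (illustrative only; the commands are exactly those the log shows being run over SSH):

	// gatherlogs.go - runs the same gathering commands the loop above executes
	// on the node and reports how much output each produced.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmds := map[string][]string{
			"containerd": {"sudo", "journalctl", "-u", "containerd", "-n", "400"},
			"kubelet":    {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
			"dmesg":      {"bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			"containers": {"bash", "-c", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for name, argv := range cmds {
			out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("== %s: error: %v\n", name, err)
			}
			fmt.Printf("== %s (%d bytes)\n", name, len(out))
		}
	}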
	I1213 10:08:59.659171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.669569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:59.669639  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:59.695058  285837 cri.go:89] found id: ""
	I1213 10:08:59.695123  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.695146  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:59.695163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:59.695255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:59.720734  285837 cri.go:89] found id: ""
	I1213 10:08:59.720799  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.720822  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:59.720840  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:59.720935  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:59.744586  285837 cri.go:89] found id: ""
	I1213 10:08:59.744661  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.744684  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:59.744698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:59.744770  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:59.771374  285837 cri.go:89] found id: ""
	I1213 10:08:59.771408  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.771417  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:59.771439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:59.771541  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:59.799406  285837 cri.go:89] found id: ""
	I1213 10:08:59.799441  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.799450  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:59.799473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:59.799577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:59.828067  285837 cri.go:89] found id: ""
	I1213 10:08:59.828142  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.828165  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:59.828187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:59.828255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:59.853064  285837 cri.go:89] found id: ""
	I1213 10:08:59.853130  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.853152  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:59.853174  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:59.853238  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:59.881735  285837 cri.go:89] found id: ""
	I1213 10:08:59.881772  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.881781  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:59.881790  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:59.881820  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:59.909551  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:59.909578  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:59.965746  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:59.965781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.979378  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:59.979407  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:00.187890  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:00.187915  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:00.187930  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
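
The recurring "dial tcp [::1]:8443: connect: connection refused" means kubectl on the node is targeting https://localhost:8443 and nothing is listening there, consistent with the empty kube-apiserver probe results above. A quick connectivity check capturing the same failure mode (illustrative sketch; the port comes from the log):

	// apicheck.go - probes the apiserver port kubectl is failing to reach.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// The state this log captures repeatedly: connection refused.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}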
	I1213 10:09:02.742568  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:02.753251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:02.753340  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:02.786726  285837 cri.go:89] found id: ""
	I1213 10:09:02.786749  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.786758  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:02.786764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:02.786823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:02.819145  285837 cri.go:89] found id: ""
	I1213 10:09:02.819166  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.819174  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:02.819193  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:02.819251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:02.847100  285837 cri.go:89] found id: ""
	I1213 10:09:02.847124  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.847133  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:02.847139  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:02.847202  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:02.873292  285837 cri.go:89] found id: ""
	I1213 10:09:02.873316  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.873325  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:02.873332  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:02.873388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:02.897520  285837 cri.go:89] found id: ""
	I1213 10:09:02.897544  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.897553  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:02.897560  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:02.897617  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:02.922393  285837 cri.go:89] found id: ""
	I1213 10:09:02.922416  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.922425  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:02.922431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:02.922490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:02.947241  285837 cri.go:89] found id: ""
	I1213 10:09:02.947264  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.947272  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:02.947278  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:02.947335  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:02.972679  285837 cri.go:89] found id: ""
	I1213 10:09:02.972704  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.972713  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:02.972722  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:02.972733  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:03.034867  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:03.034909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:03.052540  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:03.052570  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:03.128351  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:03.128373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:03.128386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:03.154970  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:03.155008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:05.683571  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:05.693787  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:05.693854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:05.718259  285837 cri.go:89] found id: ""
	I1213 10:09:05.718282  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.718291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:05.718297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:05.718357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:05.745891  285837 cri.go:89] found id: ""
	I1213 10:09:05.745915  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.745924  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:05.745931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:05.745987  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:05.782435  285837 cri.go:89] found id: ""
	I1213 10:09:05.782460  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.782469  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:05.782475  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:05.782530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:05.814908  285837 cri.go:89] found id: ""
	I1213 10:09:05.814951  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.814962  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:05.814969  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:05.815039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:05.841933  285837 cri.go:89] found id: ""
	I1213 10:09:05.841961  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.841971  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:05.841978  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:05.842039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:05.866012  285837 cri.go:89] found id: ""
	I1213 10:09:05.866041  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.866050  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:05.866056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:05.866115  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:05.890279  285837 cri.go:89] found id: ""
	I1213 10:09:05.890307  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.890315  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:05.890322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:05.890379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:05.915405  285837 cri.go:89] found id: ""
	I1213 10:09:05.915428  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.915436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:05.915446  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:05.915457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:05.971454  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:05.971486  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:05.984906  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:05.984951  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:06.083616  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:06.083701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:06.083737  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:06.114405  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:06.114443  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:08.641977  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:08.652131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:08.652197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:08.675938  285837 cri.go:89] found id: ""
	I1213 10:09:08.675961  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.675970  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:08.675976  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:08.676038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:08.702206  285837 cri.go:89] found id: ""
	I1213 10:09:08.702281  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.702304  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:08.702321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:08.702400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:08.726527  285837 cri.go:89] found id: ""
	I1213 10:09:08.726599  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.726621  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:08.726639  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:08.726726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:08.751396  285837 cri.go:89] found id: ""
	I1213 10:09:08.751469  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.751492  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:08.751555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:08.751631  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:08.787796  285837 cri.go:89] found id: ""
	I1213 10:09:08.787828  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.787838  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:08.787844  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:08.787908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:08.819599  285837 cri.go:89] found id: ""
	I1213 10:09:08.819634  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.819643  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:08.819650  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:08.819717  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:08.846345  285837 cri.go:89] found id: ""
	I1213 10:09:08.846372  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.846381  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:08.846387  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:08.846445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:08.870594  285837 cri.go:89] found id: ""
	I1213 10:09:08.870664  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.870710  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:08.870746  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:08.870797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:08.928780  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:08.928814  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:08.944017  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:08.944043  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:09.014860  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:09.004450    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.005493    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008097    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008783    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.010658    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:09.014883  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:09.014896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:09.047081  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:09.047174  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.588198  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:11.600902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:11.600973  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:11.629261  285837 cri.go:89] found id: ""
	I1213 10:09:11.629286  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.629295  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:11.629301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:11.629362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:11.653238  285837 cri.go:89] found id: ""
	I1213 10:09:11.653260  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.653269  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:11.653275  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:11.653332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:11.681922  285837 cri.go:89] found id: ""
	I1213 10:09:11.681946  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.681956  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:11.681962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:11.682019  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:11.711733  285837 cri.go:89] found id: ""
	I1213 10:09:11.711762  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.711770  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:11.711776  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:11.711834  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:11.736582  285837 cri.go:89] found id: ""
	I1213 10:09:11.736608  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.736616  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:11.736625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:11.736681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:11.759927  285837 cri.go:89] found id: ""
	I1213 10:09:11.759951  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.759961  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:11.759967  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:11.760022  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:11.794760  285837 cri.go:89] found id: ""
	I1213 10:09:11.794787  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.794797  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:11.794803  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:11.794862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:11.822009  285837 cri.go:89] found id: ""
	I1213 10:09:11.822037  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.822047  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:11.822056  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:11.822068  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:11.889206  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:11.881444    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882052    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882987    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.883619    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.885240    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:11.889228  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:11.889241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:11.914544  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:11.914576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.944548  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:11.944576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:12.000427  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:12.000460  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
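
The timestamps show the whole probe-and-gather cycle repeating roughly every three seconds, each one starting from the same pgrep liveness check. A sketch of that poll-until-deadline pattern (the 3s interval and 5m deadline are illustrative assumptions; the log does not state minikube's actual values, and this is not minikube's code):

	// waitapiserver.go - poll for the kube-apiserver process until a deadline,
	// mirroring the retry cadence visible in the timestamps above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			// Same check each cycle opens with:
			// sudo pgrep -xnf kube-apiserver.*minikube.*
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}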
	I1213 10:09:14.516876  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:14.527580  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:14.527657  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:14.551881  285837 cri.go:89] found id: ""
	I1213 10:09:14.551903  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.551911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:14.551917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:14.551977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:14.576244  285837 cri.go:89] found id: ""
	I1213 10:09:14.576267  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.576275  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:14.576281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:14.576337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:14.604979  285837 cri.go:89] found id: ""
	I1213 10:09:14.605002  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.605011  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:14.605017  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:14.605084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:14.633024  285837 cri.go:89] found id: ""
	I1213 10:09:14.633050  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.633059  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:14.633065  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:14.633123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:14.661288  285837 cri.go:89] found id: ""
	I1213 10:09:14.661316  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.661324  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:14.661331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:14.661390  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:14.686665  285837 cri.go:89] found id: ""
	I1213 10:09:14.686694  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.686704  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:14.686711  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:14.686769  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:14.712111  285837 cri.go:89] found id: ""
	I1213 10:09:14.712139  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.712148  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:14.712156  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:14.712212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:14.740346  285837 cri.go:89] found id: ""
	I1213 10:09:14.740392  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.740401  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:14.740410  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:14.740423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:14.753460  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:14.753488  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:14.834789  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:14.826269    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.827206    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.828874    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.829190    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.830588    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:14.834812  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:14.834824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:14.859634  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:14.859666  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:14.890753  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:14.890826  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.450898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:17.461075  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:17.461145  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:17.486593  285837 cri.go:89] found id: ""
	I1213 10:09:17.486616  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.486625  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:17.486632  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:17.486689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:17.511138  285837 cri.go:89] found id: ""
	I1213 10:09:17.511214  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.511230  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:17.511237  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:17.511302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:17.535780  285837 cri.go:89] found id: ""
	I1213 10:09:17.535808  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.535818  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:17.535824  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:17.535879  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:17.559884  285837 cri.go:89] found id: ""
	I1213 10:09:17.559907  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.559916  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:17.559922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:17.559983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:17.588420  285837 cri.go:89] found id: ""
	I1213 10:09:17.588446  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.588456  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:17.588462  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:17.588520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:17.616357  285837 cri.go:89] found id: ""
	I1213 10:09:17.616427  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.616450  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:17.616470  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:17.616553  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:17.640411  285837 cri.go:89] found id: ""
	I1213 10:09:17.640481  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.640506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:17.640525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:17.640606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:17.670821  285837 cri.go:89] found id: ""
	I1213 10:09:17.670887  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.670910  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:17.670931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:17.670976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.730483  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:17.730517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:17.743937  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:17.743965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:17.835718  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:17.824527    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.825248    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.826849    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.827326    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.828934    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:17.835789  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:17.835817  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:17.865207  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:17.865241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:20.392780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:20.403097  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:20.403162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:20.428028  285837 cri.go:89] found id: ""
	I1213 10:09:20.428060  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.428069  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:20.428076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:20.428141  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:20.452273  285837 cri.go:89] found id: ""
	I1213 10:09:20.452297  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.452305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:20.452312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:20.452375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:20.476828  285837 cri.go:89] found id: ""
	I1213 10:09:20.476852  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.476860  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:20.476866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:20.476922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:20.500929  285837 cri.go:89] found id: ""
	I1213 10:09:20.500952  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.500968  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:20.500975  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:20.501033  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:20.528180  285837 cri.go:89] found id: ""
	I1213 10:09:20.528207  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.528217  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:20.528223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:20.528284  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:20.553290  285837 cri.go:89] found id: ""
	I1213 10:09:20.553314  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.553323  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:20.553330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:20.553386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:20.577422  285837 cri.go:89] found id: ""
	I1213 10:09:20.577446  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.577455  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:20.577464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:20.577518  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:20.601597  285837 cri.go:89] found id: ""
	I1213 10:09:20.601623  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.601632  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:20.601643  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:20.601654  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:20.656521  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:20.656556  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:20.669890  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:20.669920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:20.737784  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:20.729553    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.730242    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.731915    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.732434    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.734060    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:20.737806  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:20.737818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:20.762811  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:20.762845  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
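The timestamps show the cadence: each full cycle (one pgrep, eight crictl listings, four log gathers) completes in roughly 2.5 s, after which the harness immediately starts over, so the apiserver check effectively fires every ~3 s. A rough bash equivalent of that wait loop, as a sketch only (not minikube's actual implementation):

    # Sketch of an equivalent wait-for-apiserver loop, run inside the node.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        echo "kube-apiserver not running yet; retrying"
        sleep 3
    done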
	I1213 10:09:23.299625  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:23.311059  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:23.311129  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:23.338174  285837 cri.go:89] found id: ""
	I1213 10:09:23.338197  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.338205  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:23.338211  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:23.338269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:23.363653  285837 cri.go:89] found id: ""
	I1213 10:09:23.363674  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.363683  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:23.363688  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:23.363750  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:23.387166  285837 cri.go:89] found id: ""
	I1213 10:09:23.387187  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.387195  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:23.387201  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:23.387257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:23.411627  285837 cri.go:89] found id: ""
	I1213 10:09:23.411650  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.411659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:23.411665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:23.411731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:23.440839  285837 cri.go:89] found id: ""
	I1213 10:09:23.440866  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.440885  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:23.440892  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:23.440950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:23.464835  285837 cri.go:89] found id: ""
	I1213 10:09:23.464857  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.464866  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:23.464872  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:23.464927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:23.489635  285837 cri.go:89] found id: ""
	I1213 10:09:23.489659  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.489668  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:23.489675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:23.489762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:23.513816  285837 cri.go:89] found id: ""
	I1213 10:09:23.513847  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.513855  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:23.513865  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:23.513875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:23.539139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:23.539173  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.565435  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:23.565463  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:23.622023  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:23.622058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:23.635231  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:23.635263  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:23.699057  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:23.690976    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.691550    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693329    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693735    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.695223    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.200117  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:26.210617  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:26.210696  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:26.235048  285837 cri.go:89] found id: ""
	I1213 10:09:26.235076  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.235085  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:26.235092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:26.235148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:26.259259  285837 cri.go:89] found id: ""
	I1213 10:09:26.259285  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.259294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:26.259300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:26.259355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:26.291742  285837 cri.go:89] found id: ""
	I1213 10:09:26.291767  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.291776  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:26.291782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:26.291864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:26.320200  285837 cri.go:89] found id: ""
	I1213 10:09:26.320225  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.320234  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:26.320240  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:26.320296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:26.347996  285837 cri.go:89] found id: ""
	I1213 10:09:26.348023  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.348033  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:26.348039  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:26.348097  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:26.376752  285837 cri.go:89] found id: ""
	I1213 10:09:26.376816  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.376830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:26.376837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:26.376893  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:26.404777  285837 cri.go:89] found id: ""
	I1213 10:09:26.404802  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.404811  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:26.404817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:26.404876  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:26.428882  285837 cri.go:89] found id: ""
	I1213 10:09:26.428904  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.428913  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:26.428922  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:26.428933  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:26.489455  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:26.489494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:26.504291  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:26.504320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:26.573661  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:26.564906    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.565725    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567441    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567990    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.569686    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.573684  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:26.573698  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:26.599463  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:26.599496  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.127681  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:29.138010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:29.138081  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:29.161918  285837 cri.go:89] found id: ""
	I1213 10:09:29.161989  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.162013  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:29.162031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:29.162114  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:29.186603  285837 cri.go:89] found id: ""
	I1213 10:09:29.186678  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.186700  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:29.186717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:29.186798  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:29.210425  285837 cri.go:89] found id: ""
	I1213 10:09:29.210489  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.210512  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:29.210529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:29.210614  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:29.237345  285837 cri.go:89] found id: ""
	I1213 10:09:29.237369  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.237377  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:29.237384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:29.237440  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:29.260918  285837 cri.go:89] found id: ""
	I1213 10:09:29.260997  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.261013  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:29.261020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:29.261075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:29.289712  285837 cri.go:89] found id: ""
	I1213 10:09:29.289738  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.289747  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:29.289753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:29.289808  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:29.321797  285837 cri.go:89] found id: ""
	I1213 10:09:29.321821  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.321831  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:29.321839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:29.321895  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:29.353498  285837 cri.go:89] found id: ""
	I1213 10:09:29.353523  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.353532  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:29.353542  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:29.353582  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:29.415160  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:29.407188    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.407994    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409598    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409900    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.411394    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:29.415183  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:29.415198  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:29.440924  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:29.440961  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.468916  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:29.468944  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:29.528468  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:29.528501  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
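The order of the gathers varies from cycle to cycle, but the set is fixed: kubelet and containerd unit logs, kernel warnings, the describe-nodes probe, and container status. All of them can be pulled by hand with the same commands the log records:

    # The diagnostics each failed cycle collects, verbatim from the log;
    # run them inside the node (e.g. via "minikube ssh").
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a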
	I1213 10:09:32.042457  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:32.054480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:32.054563  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:32.088256  285837 cri.go:89] found id: ""
	I1213 10:09:32.088282  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.088290  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:32.088296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:32.088382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:32.114080  285837 cri.go:89] found id: ""
	I1213 10:09:32.114102  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.114110  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:32.114116  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:32.114195  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:32.138708  285837 cri.go:89] found id: ""
	I1213 10:09:32.138732  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.138740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:32.138746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:32.138851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:32.163676  285837 cri.go:89] found id: ""
	I1213 10:09:32.163706  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.163715  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:32.163721  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:32.163780  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:32.188486  285837 cri.go:89] found id: ""
	I1213 10:09:32.188565  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.188582  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:32.188589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:32.188652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:32.212912  285837 cri.go:89] found id: ""
	I1213 10:09:32.212936  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.212945  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:32.212951  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:32.213034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:32.242067  285837 cri.go:89] found id: ""
	I1213 10:09:32.242090  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.242099  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:32.242106  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:32.242163  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:32.280832  285837 cri.go:89] found id: ""
	I1213 10:09:32.280855  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.280864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:32.280874  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:32.280885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:32.344925  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:32.344963  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:32.359370  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:32.359400  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:32.425438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:32.425459  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:32.425472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:32.449956  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:32.449990  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:34.978245  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:34.989159  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:34.989236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:35.017235  285837 cri.go:89] found id: ""
	I1213 10:09:35.017258  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.017267  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:35.017273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:35.017341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:35.050437  285837 cri.go:89] found id: ""
	I1213 10:09:35.050458  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.050467  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:35.050473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:35.050529  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:35.085905  285837 cri.go:89] found id: ""
	I1213 10:09:35.085926  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.085935  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:35.085941  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:35.085994  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:35.118261  285837 cri.go:89] found id: ""
	I1213 10:09:35.118283  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.118292  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:35.118299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:35.118360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:35.144531  285837 cri.go:89] found id: ""
	I1213 10:09:35.144555  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.144563  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:35.144569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:35.144627  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:35.170241  285837 cri.go:89] found id: ""
	I1213 10:09:35.170317  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.170340  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:35.170359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:35.170433  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:35.195958  285837 cri.go:89] found id: ""
	I1213 10:09:35.195986  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.195995  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:35.196001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:35.196066  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:35.220509  285837 cri.go:89] found id: ""
	I1213 10:09:35.220535  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.220544  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:35.220553  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:35.220563  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:35.276863  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:35.277042  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:35.294239  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:35.294265  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:35.367085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:35.367108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:35.367121  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:35.392804  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:35.392842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:37.919692  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:37.929805  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:37.929875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:37.954708  285837 cri.go:89] found id: ""
	I1213 10:09:37.954782  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.954806  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:37.954825  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:37.954914  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:37.979259  285837 cri.go:89] found id: ""
	I1213 10:09:37.979332  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.979357  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:37.979375  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:37.979459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:38.008473  285837 cri.go:89] found id: ""
	I1213 10:09:38.008554  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.008579  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:38.008597  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:38.008695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:38.051746  285837 cri.go:89] found id: ""
	I1213 10:09:38.051820  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.051843  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:38.051863  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:38.051957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:38.082373  285837 cri.go:89] found id: ""
	I1213 10:09:38.082405  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.082413  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:38.082419  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:38.082477  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:38.109623  285837 cri.go:89] found id: ""
	I1213 10:09:38.109646  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.109655  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:38.109661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:38.109718  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:38.133779  285837 cri.go:89] found id: ""
	I1213 10:09:38.133807  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.133815  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:38.133822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:38.133892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:38.158199  285837 cri.go:89] found id: ""
	I1213 10:09:38.158263  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.158286  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:38.158338  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:38.158371  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:38.171856  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:38.171885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:38.237998  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:38.238021  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:38.238033  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:38.263694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:38.263729  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:38.301569  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:38.301594  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
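The block above is one complete iteration of the apiserver wait loop, and it repeats essentially unchanged (only timestamps and PIDs differ) every ~3 seconds for the rest of this test: pgrep probes for a kube-apiserver process, crictl is queried for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), every query comes back empty, and the fallback diagnostics (kubelet, dmesg, describe nodes, containerd, container status) are gathered instead. A minimal Go sketch of that poll-then-gather pattern follows; the function names (apiServerRunning, gatherDiagnostics), the 2-minute deadline, and the 3-second interval are illustrative assumptions, not minikube's actual implementation — only the probed command lines are taken from the log above.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiServerRunning mirrors the probe in the log:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits 0 only when a matching process exists.
    func apiServerRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // gatherDiagnostics runs a subset of the fallback log collection seen
    // above. Errors are deliberately ignored: this is best-effort triage.
    func gatherDiagnostics() {
        for _, args := range [][]string{
            {"crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"},
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"journalctl", "-u", "containerd", "-n", "400"},
        } {
            out, _ := exec.Command("sudo", args...).CombinedOutput()
            fmt.Printf("--- %v\n%s\n", args, out)
        }
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            if apiServerRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            gatherDiagnostics()
            time.Sleep(3 * time.Second) // interval inferred from the log timestamps
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }

In this run the loop never succeeds: every cycle below ends the same way, with zero containers found and describe-nodes refused.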
	I1213 10:09:40.863927  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:40.874647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:40.874715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:40.902898  285837 cri.go:89] found id: ""
	I1213 10:09:40.902922  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.902931  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:40.902939  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:40.903000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:40.928251  285837 cri.go:89] found id: ""
	I1213 10:09:40.928277  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.928287  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:40.928294  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:40.928350  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:40.952178  285837 cri.go:89] found id: ""
	I1213 10:09:40.952201  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.952210  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:40.952216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:40.952271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:40.980522  285837 cri.go:89] found id: ""
	I1213 10:09:40.980548  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.980557  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:40.980564  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:40.980620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:41.007391  285837 cri.go:89] found id: ""
	I1213 10:09:41.007417  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.007427  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:41.007433  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:41.007498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:41.056690  285837 cri.go:89] found id: ""
	I1213 10:09:41.056762  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.056786  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:41.056806  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:41.056892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:41.082372  285837 cri.go:89] found id: ""
	I1213 10:09:41.082443  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.082481  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:41.082505  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:41.082592  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:41.106556  285837 cri.go:89] found id: ""
	I1213 10:09:41.106626  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.106648  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:41.106680  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:41.106722  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:41.162248  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:41.162281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:41.175724  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:41.175753  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:41.243327  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:41.243393  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:41.243420  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:41.269060  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:41.269142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:43.812670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:43.823281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:43.823360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:43.846549  285837 cri.go:89] found id: ""
	I1213 10:09:43.846571  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.846579  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:43.846585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:43.846640  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:43.879456  285837 cri.go:89] found id: ""
	I1213 10:09:43.879541  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.879557  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:43.879563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:43.879632  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:43.904717  285837 cri.go:89] found id: ""
	I1213 10:09:43.904745  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.904755  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:43.904761  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:43.904818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:43.929847  285837 cri.go:89] found id: ""
	I1213 10:09:43.929873  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.929883  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:43.929890  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:43.929950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:43.954073  285837 cri.go:89] found id: ""
	I1213 10:09:43.954146  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.954168  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:43.954187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:43.954278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:43.979175  285837 cri.go:89] found id: ""
	I1213 10:09:43.979257  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.979280  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:43.979299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:43.979406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:44.013549  285837 cri.go:89] found id: ""
	I1213 10:09:44.013574  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.013584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:44.013590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:44.013653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:44.043145  285837 cri.go:89] found id: ""
	I1213 10:09:44.043222  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.043244  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:44.043267  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:44.043306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:44.058657  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:44.058685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:44.137763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:44.137786  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:44.137799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:44.163596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:44.163630  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:44.193981  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:44.194008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:46.751860  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:46.762578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:46.762653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:46.787138  285837 cri.go:89] found id: ""
	I1213 10:09:46.787161  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.787170  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:46.787176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:46.787234  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:46.812348  285837 cri.go:89] found id: ""
	I1213 10:09:46.812371  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.812379  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:46.812386  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:46.812445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:46.840689  285837 cri.go:89] found id: ""
	I1213 10:09:46.840712  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.840721  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:46.840727  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:46.840784  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:46.870288  285837 cri.go:89] found id: ""
	I1213 10:09:46.870313  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.870322  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:46.870328  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:46.870450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:46.896231  285837 cri.go:89] found id: ""
	I1213 10:09:46.896255  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.896269  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:46.896276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:46.896334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:46.921572  285837 cri.go:89] found id: ""
	I1213 10:09:46.921604  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.921613  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:46.921636  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:46.921721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:46.948191  285837 cri.go:89] found id: ""
	I1213 10:09:46.948220  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.948229  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:46.948236  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:46.948365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:46.977518  285837 cri.go:89] found id: ""
	I1213 10:09:46.977585  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.977602  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:46.977612  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:46.977624  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:47.034861  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:47.034901  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:47.049608  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:47.049638  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:47.120624  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:47.120648  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:47.120662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:47.146083  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:47.146118  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:49.676188  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:49.688330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:49.688400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:49.714933  285837 cri.go:89] found id: ""
	I1213 10:09:49.714958  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.714967  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:49.714973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:49.715035  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:49.739883  285837 cri.go:89] found id: ""
	I1213 10:09:49.739912  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.739923  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:49.739931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:49.739990  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:49.768673  285837 cri.go:89] found id: ""
	I1213 10:09:49.768699  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.768718  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:49.768726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:49.768788  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:49.794628  285837 cri.go:89] found id: ""
	I1213 10:09:49.794694  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.794717  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:49.794735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:49.794822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:49.819205  285837 cri.go:89] found id: ""
	I1213 10:09:49.819237  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.819247  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:49.819253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:49.819318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:49.843189  285837 cri.go:89] found id: ""
	I1213 10:09:49.843212  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.843228  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:49.843235  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:49.843303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:49.867965  285837 cri.go:89] found id: ""
	I1213 10:09:49.867998  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.868008  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:49.868016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:49.868089  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:49.891561  285837 cri.go:89] found id: ""
	I1213 10:09:49.891586  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.891595  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:49.891605  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:49.891629  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:49.953785  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:49.953824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:49.967425  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:49.967453  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:50.041318  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:50.041391  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:50.041419  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:50.070955  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:50.071029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.603479  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:52.615038  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:52.615113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:52.643538  285837 cri.go:89] found id: ""
	I1213 10:09:52.643561  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.643570  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:52.643577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:52.643636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:52.668477  285837 cri.go:89] found id: ""
	I1213 10:09:52.668514  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.668523  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:52.668530  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:52.668586  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:52.695551  285837 cri.go:89] found id: ""
	I1213 10:09:52.695574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.695582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:52.695589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:52.695647  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:52.723965  285837 cri.go:89] found id: ""
	I1213 10:09:52.723991  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.724000  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:52.724007  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:52.724061  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:52.748159  285837 cri.go:89] found id: ""
	I1213 10:09:52.748186  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.748195  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:52.748202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:52.748257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:52.771805  285837 cri.go:89] found id: ""
	I1213 10:09:52.771836  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.771846  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:52.771853  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:52.771910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:52.795549  285837 cri.go:89] found id: ""
	I1213 10:09:52.795574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.795584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:52.795590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:52.795650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:52.819748  285837 cri.go:89] found id: ""
	I1213 10:09:52.819775  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.819785  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:52.819794  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:52.819805  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:52.882031  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:52.882051  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:52.882062  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:52.907759  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:52.907795  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.934360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:52.934390  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:52.989946  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:52.989982  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:55.503671  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:55.514125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:55.514196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:55.540594  285837 cri.go:89] found id: ""
	I1213 10:09:55.540621  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.540631  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:55.540637  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:55.540694  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:55.570352  285837 cri.go:89] found id: ""
	I1213 10:09:55.570378  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.570387  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:55.570395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:55.570450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:55.596509  285837 cri.go:89] found id: ""
	I1213 10:09:55.596533  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.596541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:55.596547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:55.596604  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:55.622553  285837 cri.go:89] found id: ""
	I1213 10:09:55.622579  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.622587  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:55.622593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:55.622650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:55.647770  285837 cri.go:89] found id: ""
	I1213 10:09:55.647794  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.647803  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:55.647809  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:55.647874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:55.672615  285837 cri.go:89] found id: ""
	I1213 10:09:55.672679  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.672693  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:55.672701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:55.672756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:55.697017  285837 cri.go:89] found id: ""
	I1213 10:09:55.697041  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.697050  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:55.697063  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:55.697123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:55.720795  285837 cri.go:89] found id: ""
	I1213 10:09:55.720866  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.720891  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:55.720914  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:55.720950  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:55.745823  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:55.745857  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:55.774634  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:55.774663  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:55.830064  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:55.830098  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:55.843868  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:55.843896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:55.905758  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
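The stderr above is kubectl's discovery client failing five times in a row: with no kube-apiserver container running, nothing is listening on the node's port 8443, so every dial to https://localhost:8443 is refused at the TCP layer. A self-contained Go probe of the same condition is sketched below; the address and the 2-second timeout are assumptions for illustration, not taken from the test.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Attempt the same TCP connection kubectl's requests depend on.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // On the failing node this prints the familiar
            // "connect: connection refused".
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

Run on the minikube node, such a probe reproduces the refusal seen in each describe-nodes attempt; it would only start succeeding once an apiserver container is actually listening on 8443.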
	I1213 10:09:58.406072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:58.418120  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:58.418199  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:58.443021  285837 cri.go:89] found id: ""
	I1213 10:09:58.443050  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.443059  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:58.443066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:58.443126  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:58.468115  285837 cri.go:89] found id: ""
	I1213 10:09:58.468139  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.468147  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:58.468154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:58.468214  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:58.496991  285837 cri.go:89] found id: ""
	I1213 10:09:58.497015  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.497025  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:58.497032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:58.497098  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:58.530053  285837 cri.go:89] found id: ""
	I1213 10:09:58.530076  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.530085  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:58.530091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:58.530149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:58.561990  285837 cri.go:89] found id: ""
	I1213 10:09:58.562013  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.562022  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:58.562028  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:58.562091  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:58.595912  285837 cri.go:89] found id: ""
	I1213 10:09:58.595984  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.596007  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:58.596026  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:58.596113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:58.626521  285837 cri.go:89] found id: ""
	I1213 10:09:58.626593  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.626616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:58.626635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:58.626720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:58.655898  285837 cri.go:89] found id: ""
	I1213 10:09:58.655963  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.655987  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:58.656008  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:58.656032  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:58.711709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:58.711741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:58.726942  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:58.726969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:58.798293  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:58.798314  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:58.798327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:58.822936  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:58.822973  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:01.351670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:01.362442  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:01.362517  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:01.388700  285837 cri.go:89] found id: ""
	I1213 10:10:01.388734  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.388744  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:01.388751  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:01.388824  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:01.418393  285837 cri.go:89] found id: ""
	I1213 10:10:01.418471  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.418496  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:01.418515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:01.418602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:01.449860  285837 cri.go:89] found id: ""
	I1213 10:10:01.449937  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.449962  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:01.449980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:01.450064  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:01.475973  285837 cri.go:89] found id: ""
	I1213 10:10:01.476035  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.476049  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:01.476056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:01.476118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:01.501452  285837 cri.go:89] found id: ""
	I1213 10:10:01.501474  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.501499  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:01.501506  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:01.501576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:01.527738  285837 cri.go:89] found id: ""
	I1213 10:10:01.527808  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.527832  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:01.527852  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:01.527946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:01.553256  285837 cri.go:89] found id: ""
	I1213 10:10:01.553280  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.553289  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:01.553296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:01.553354  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:01.578833  285837 cri.go:89] found id: ""
	I1213 10:10:01.578855  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.578864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:01.578875  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:01.578892  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:01.634755  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:01.634790  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
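
The dmesg invocation filters the kernel ring buffer before capture: --level warn,err,crit,alert,emerg keeps only warnings and worse, -H renders human-readable timestamps, -P disables the pager, and -L=never suppresses color codes so the output is safe to collect over ssh. To inspect the same slice of kernel messages on the node directly (a sketch, assuming util-linux dmesg is available there, as it is in the minikube base image):

    minikube -p functional-074420 ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"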
	I1213 10:10:01.649799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:01.649832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:01.721470  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:01.721491  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:01.721504  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:01.747322  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:01.747357  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.288307  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:04.300683  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:04.300805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:04.332215  285837 cri.go:89] found id: ""
	I1213 10:10:04.332242  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.332252  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:04.332259  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:04.332318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:04.358136  285837 cri.go:89] found id: ""
	I1213 10:10:04.358164  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.358173  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:04.358180  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:04.358248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:04.383446  285837 cri.go:89] found id: ""
	I1213 10:10:04.383479  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.383488  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:04.383493  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:04.383578  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:04.408888  285837 cri.go:89] found id: ""
	I1213 10:10:04.408914  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.408923  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:04.408930  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:04.409009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:04.438109  285837 cri.go:89] found id: ""
	I1213 10:10:04.438145  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.438155  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:04.438163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:04.438233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:04.462623  285837 cri.go:89] found id: ""
	I1213 10:10:04.462692  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.462725  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:04.462745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:04.462826  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:04.488102  285837 cri.go:89] found id: ""
	I1213 10:10:04.488127  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.488137  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:04.488143  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:04.488230  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:04.515038  285837 cri.go:89] found id: ""
	I1213 10:10:04.515078  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.515087  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:04.515096  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:04.515134  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:04.540448  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:04.540483  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.570913  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:04.570942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:04.626396  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:04.626430  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:04.639908  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:04.639938  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:04.704410  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:07.204629  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:07.215001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:07.215080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:07.239145  285837 cri.go:89] found id: ""
	I1213 10:10:07.239170  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.239180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:07.239186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:07.239243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:07.263051  285837 cri.go:89] found id: ""
	I1213 10:10:07.263077  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.263086  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:07.263092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:07.263149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:07.293024  285837 cri.go:89] found id: ""
	I1213 10:10:07.293051  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.293060  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:07.293066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:07.293142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:07.320096  285837 cri.go:89] found id: ""
	I1213 10:10:07.320119  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.320128  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:07.320133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:07.320189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:07.349635  285837 cri.go:89] found id: ""
	I1213 10:10:07.349661  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.349670  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:07.349676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:07.349733  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:07.374644  285837 cri.go:89] found id: ""
	I1213 10:10:07.374720  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.374744  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:07.374767  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:07.374875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:07.399088  285837 cri.go:89] found id: ""
	I1213 10:10:07.399108  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.399117  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:07.399123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:07.399179  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:07.423187  285837 cri.go:89] found id: ""
	I1213 10:10:07.423210  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.423219  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:07.423229  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:07.423244  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:07.478648  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:07.478682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:07.492218  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:07.492247  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:07.558077  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:07.558147  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:07.558168  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:07.583061  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:07.583093  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.116593  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:10.127456  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:10.127551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:10.157660  285837 cri.go:89] found id: ""
	I1213 10:10:10.157684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.157693  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:10.157699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:10.157758  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:10.183132  285837 cri.go:89] found id: ""
	I1213 10:10:10.183166  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.183175  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:10.183181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:10.183248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:10.209615  285837 cri.go:89] found id: ""
	I1213 10:10:10.209681  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.209704  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:10.209723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:10.209817  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:10.234760  285837 cri.go:89] found id: ""
	I1213 10:10:10.234789  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.234798  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:10.234804  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:10.234877  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:10.261577  285837 cri.go:89] found id: ""
	I1213 10:10:10.261608  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.261618  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:10.261624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:10.261682  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:10.289616  285837 cri.go:89] found id: ""
	I1213 10:10:10.289655  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.289664  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:10.289670  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:10.289742  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:10.316640  285837 cri.go:89] found id: ""
	I1213 10:10:10.316684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.316693  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:10.316699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:10.316768  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:10.346038  285837 cri.go:89] found id: ""
	I1213 10:10:10.346065  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.346074  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:10.346084  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:10.346095  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.377589  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:10.377669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:10.435680  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:10.435714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:10.449198  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:10.449226  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:10.521596  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:10.521619  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:10.521632  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.047644  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:13.059744  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:13.059820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:13.087860  285837 cri.go:89] found id: ""
	I1213 10:10:13.087901  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.087911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:13.087918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:13.087983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:13.112735  285837 cri.go:89] found id: ""
	I1213 10:10:13.112802  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.112844  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:13.112876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:13.112953  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:13.141197  285837 cri.go:89] found id: ""
	I1213 10:10:13.141223  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.141244  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:13.141255  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:13.141315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:13.165043  285837 cri.go:89] found id: ""
	I1213 10:10:13.165119  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.165143  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:13.165155  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:13.165240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:13.189664  285837 cri.go:89] found id: ""
	I1213 10:10:13.189746  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.189769  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:13.189782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:13.189854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:13.213620  285837 cri.go:89] found id: ""
	I1213 10:10:13.213686  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.213709  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:13.213723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:13.213799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:13.241644  285837 cri.go:89] found id: ""
	I1213 10:10:13.241667  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.241676  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:13.241728  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:13.241812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:13.265927  285837 cri.go:89] found id: ""
	I1213 10:10:13.265997  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.266030  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:13.266053  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:13.266079  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.293162  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:13.293239  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:13.326250  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:13.326334  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:13.386676  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:13.386710  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:13.400810  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:13.400838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:13.469704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:15.969962  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:15.980347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:15.980492  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:16.010088  285837 cri.go:89] found id: ""
	I1213 10:10:16.010118  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.010127  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:16.010133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:16.010196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:16.049187  285837 cri.go:89] found id: ""
	I1213 10:10:16.049209  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.049217  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:16.049223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:16.049291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:16.077965  285837 cri.go:89] found id: ""
	I1213 10:10:16.077987  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.077996  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:16.078002  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:16.078058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:16.108378  285837 cri.go:89] found id: ""
	I1213 10:10:16.108451  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.108474  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:16.108492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:16.108577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:16.134213  285837 cri.go:89] found id: ""
	I1213 10:10:16.134235  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.134244  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:16.134250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:16.134310  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:16.160222  285837 cri.go:89] found id: ""
	I1213 10:10:16.160255  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.160266  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:16.160273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:16.160343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:16.188619  285837 cri.go:89] found id: ""
	I1213 10:10:16.188646  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.188655  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:16.188662  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:16.188725  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:16.213285  285837 cri.go:89] found id: ""
	I1213 10:10:16.213358  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.213375  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:16.213387  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:16.213398  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:16.241893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:16.241922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:16.298312  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:16.298349  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:16.312327  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:16.312403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:16.384024  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:16.384050  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:16.384064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:18.909524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:18.920391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:18.920459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:18.945319  285837 cri.go:89] found id: ""
	I1213 10:10:18.945358  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.945367  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:18.945374  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:18.945431  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:18.968360  285837 cri.go:89] found id: ""
	I1213 10:10:18.968381  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.968390  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:18.968420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:18.968476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:18.992303  285837 cri.go:89] found id: ""
	I1213 10:10:18.992324  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.992333  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:18.992339  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:18.992393  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:19.017601  285837 cri.go:89] found id: ""
	I1213 10:10:19.017677  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.017700  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:19.017718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:19.017814  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:19.057563  285837 cri.go:89] found id: ""
	I1213 10:10:19.057636  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.057672  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:19.057695  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:19.057783  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:19.089906  285837 cri.go:89] found id: ""
	I1213 10:10:19.089929  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.089938  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:19.089944  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:19.090014  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:19.115237  285837 cri.go:89] found id: ""
	I1213 10:10:19.115258  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.115266  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:19.115272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:19.115351  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:19.140000  285837 cri.go:89] found id: ""
	I1213 10:10:19.140067  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.140090  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:19.140112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:19.140150  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:19.201866  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:19.201888  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:19.201900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:19.227103  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:19.227135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:19.253635  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:19.253664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:19.317211  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:19.317245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:21.835317  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:21.848786  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:21.848905  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:21.873912  285837 cri.go:89] found id: ""
	I1213 10:10:21.873938  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.873947  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:21.873966  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:21.874030  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:21.898927  285837 cri.go:89] found id: ""
	I1213 10:10:21.898948  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.898957  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:21.898963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:21.899017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:21.928040  285837 cri.go:89] found id: ""
	I1213 10:10:21.928067  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.928076  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:21.928083  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:21.928139  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:21.952762  285837 cri.go:89] found id: ""
	I1213 10:10:21.952784  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.952793  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:21.952800  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:21.952862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:21.977394  285837 cri.go:89] found id: ""
	I1213 10:10:21.977421  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.977430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:21.977437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:21.977502  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:22.001693  285837 cri.go:89] found id: ""
	I1213 10:10:22.001729  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.001739  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:22.001746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:22.001813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:22.044074  285837 cri.go:89] found id: ""
	I1213 10:10:22.044111  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.044120  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:22.044126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:22.044203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:22.083324  285837 cri.go:89] found id: ""
	I1213 10:10:22.083361  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.083370  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:22.083380  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:22.083392  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:22.152550  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:22.152574  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:22.152590  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:22.177867  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:22.177900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:22.205266  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:22.205296  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:22.260906  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:22.260942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:24.776001  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:24.787300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:24.787370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:24.817822  285837 cri.go:89] found id: ""
	I1213 10:10:24.817967  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.817991  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:24.818032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:24.818131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:24.843042  285837 cri.go:89] found id: ""
	I1213 10:10:24.843079  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.843088  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:24.843094  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:24.843160  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:24.866977  285837 cri.go:89] found id: ""
	I1213 10:10:24.867012  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.867022  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:24.867029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:24.867100  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:24.892141  285837 cri.go:89] found id: ""
	I1213 10:10:24.892167  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.892177  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:24.892183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:24.892258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:24.922137  285837 cri.go:89] found id: ""
	I1213 10:10:24.922207  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.922230  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:24.922248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:24.922343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:24.954689  285837 cri.go:89] found id: ""
	I1213 10:10:24.954720  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.954729  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:24.954736  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:24.954802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:24.979305  285837 cri.go:89] found id: ""
	I1213 10:10:24.979379  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.979400  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:24.979420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:24.979545  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:25.012112  285837 cri.go:89] found id: ""
	I1213 10:10:25.012139  285837 logs.go:282] 0 containers: []
	W1213 10:10:25.012149  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:25.012163  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:25.012177  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:25.083061  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:25.083100  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:25.100686  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:25.100713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:25.172319  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:25.172341  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:25.172354  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:25.198195  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:25.198230  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:27.728458  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:27.739147  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:27.739212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:27.768935  285837 cri.go:89] found id: ""
	I1213 10:10:27.768964  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.768973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:27.768980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:27.769069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:27.793269  285837 cri.go:89] found id: ""
	I1213 10:10:27.793294  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.793303  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:27.793309  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:27.793381  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:27.819458  285837 cri.go:89] found id: ""
	I1213 10:10:27.819481  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.819490  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:27.819496  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:27.819585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:27.844796  285837 cri.go:89] found id: ""
	I1213 10:10:27.844819  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.844828  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:27.844834  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:27.844892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:27.873605  285837 cri.go:89] found id: ""
	I1213 10:10:27.873629  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.873638  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:27.873644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:27.873726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:27.897452  285837 cri.go:89] found id: ""
	I1213 10:10:27.897476  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.897485  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:27.897491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:27.897548  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:27.923761  285837 cri.go:89] found id: ""
	I1213 10:10:27.923786  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.923796  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:27.923802  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:27.923880  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:27.952811  285837 cri.go:89] found id: ""
	I1213 10:10:27.952875  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.952907  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:27.952949  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:27.952978  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:27.982369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:27.982444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:28.039695  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:28.039739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:28.059367  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:28.059394  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:28.141898  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:28.141920  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:28.141931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:30.668303  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:30.681191  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:30.681264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:30.708785  285837 cri.go:89] found id: ""
	I1213 10:10:30.708809  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.708817  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:30.708823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:30.708887  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:30.733895  285837 cri.go:89] found id: ""
	I1213 10:10:30.733918  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.733926  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:30.733932  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:30.733991  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:30.762790  285837 cri.go:89] found id: ""
	I1213 10:10:30.762811  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.762820  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:30.762826  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:30.762891  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:30.786743  285837 cri.go:89] found id: ""
	I1213 10:10:30.786807  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.786829  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:30.786846  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:30.786925  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:30.813249  285837 cri.go:89] found id: ""
	I1213 10:10:30.813272  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.813281  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:30.813288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:30.813347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:30.837491  285837 cri.go:89] found id: ""
	I1213 10:10:30.837520  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.837529  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:30.837536  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:30.837596  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:30.862539  285837 cri.go:89] found id: ""
	I1213 10:10:30.862599  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.862622  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:30.862640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:30.862714  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:30.887350  285837 cri.go:89] found id: ""
	I1213 10:10:30.887371  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.887379  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:30.887388  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:30.887399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:30.943669  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:30.943701  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:30.957123  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:30.957172  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:31.036468  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:31.036496  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:31.036509  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:31.065951  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:31.065987  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.600787  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:33.611280  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:33.611352  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:33.640061  285837 cri.go:89] found id: ""
	I1213 10:10:33.640084  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.640093  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:33.640099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:33.640159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:33.664736  285837 cri.go:89] found id: ""
	I1213 10:10:33.664763  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.664772  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:33.664780  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:33.664839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:33.688858  285837 cri.go:89] found id: ""
	I1213 10:10:33.688882  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.688892  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:33.688898  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:33.688955  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:33.719915  285837 cri.go:89] found id: ""
	I1213 10:10:33.719944  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.719953  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:33.719960  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:33.720015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:33.744897  285837 cri.go:89] found id: ""
	I1213 10:10:33.744927  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.744937  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:33.744943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:33.745037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:33.773037  285837 cri.go:89] found id: ""
	I1213 10:10:33.773059  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.773067  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:33.773073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:33.773134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:33.797407  285837 cri.go:89] found id: ""
	I1213 10:10:33.797433  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.797443  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:33.797449  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:33.797510  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:33.825833  285837 cri.go:89] found id: ""
	I1213 10:10:33.825859  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.825868  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:33.825877  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:33.825889  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:33.851755  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:33.851788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.884360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:33.884385  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:33.940045  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:33.940080  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:33.954004  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:33.954039  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:34.035282  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
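The pass ending above is one of many near-identical polling cycles in this start attempt: minikube looks for a kube-apiserver process, finds none of the expected control-plane containers via crictl, then gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. Every describe-nodes attempt fails with connection refused on localhost:8443, consistent with the apiserver container never having been created. Below is a minimal local sketch of such a poll-and-report loop; the structure is inferred from the log, the helper names are hypothetical, commands run via os/exec rather than minikube's SSH runner, and this is not minikube's actual implementation.

// poll_apiserver.go: a sketch of the wait loop the log above records.
// Assumptions: commands run locally (minikube runs them over SSH); the
// component list mirrors the names polled in the log. Not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// apiserverRunning mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// pgrep exits non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`, returning
// the (possibly empty) list of container IDs found for a component.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			if ids := containerIDs(c); len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
			}
		}
		// In the log, each failed pass also collects kubelet, dmesg,
		// describe-nodes, containerd, and container-status diagnostics
		// here before sleeping and retrying.
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

The ~2.5s retry interval matches the gap between pgrep passes in the timestamps above; the real loop gives up when the overall start timeout expires, which is what ultimately fails this test.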
	I1213 10:10:36.535645  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:36.547382  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:36.547469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:36.579677  285837 cri.go:89] found id: ""
	I1213 10:10:36.579701  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.579711  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:36.579725  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:36.579802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:36.606029  285837 cri.go:89] found id: ""
	I1213 10:10:36.606058  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.606067  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:36.606073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:36.606134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:36.631618  285837 cri.go:89] found id: ""
	I1213 10:10:36.631640  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.631649  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:36.631655  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:36.631712  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:36.656376  285837 cri.go:89] found id: ""
	I1213 10:10:36.656399  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.656407  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:36.656413  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:36.656469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:36.684348  285837 cri.go:89] found id: ""
	I1213 10:10:36.684369  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.684377  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:36.684383  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:36.684443  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:36.708549  285837 cri.go:89] found id: ""
	I1213 10:10:36.708578  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.708587  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:36.708594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:36.708653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:36.732630  285837 cri.go:89] found id: ""
	I1213 10:10:36.732659  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.732669  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:36.732677  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:36.732738  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:36.761465  285837 cri.go:89] found id: ""
	I1213 10:10:36.761493  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.761503  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:36.761513  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:36.761524  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:36.774752  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:36.774787  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:36.837540  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:36.829412    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.830021    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.831594    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.832091    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.833636    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:36.829412    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.830021    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.831594    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.832091    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.833636    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:36.837603  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:36.837625  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:36.862806  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:36.862844  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:36.893277  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:36.893302  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.453851  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:39.464513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:39.464595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:39.488288  285837 cri.go:89] found id: ""
	I1213 10:10:39.488310  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.488319  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:39.488329  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:39.488386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:39.513054  285837 cri.go:89] found id: ""
	I1213 10:10:39.513077  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.513085  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:39.513091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:39.513156  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:39.542442  285837 cri.go:89] found id: ""
	I1213 10:10:39.542465  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.542474  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:39.542480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:39.542535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:39.575244  285837 cri.go:89] found id: ""
	I1213 10:10:39.575271  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.575280  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:39.575286  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:39.575341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:39.605371  285837 cri.go:89] found id: ""
	I1213 10:10:39.605402  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.605411  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:39.605417  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:39.605475  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:39.629581  285837 cri.go:89] found id: ""
	I1213 10:10:39.629608  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.629617  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:39.629624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:39.629680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:39.657061  285837 cri.go:89] found id: ""
	I1213 10:10:39.657089  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.657098  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:39.657104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:39.657162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:39.680815  285837 cri.go:89] found id: ""
	I1213 10:10:39.680880  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.680894  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:39.680904  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:39.680915  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.738790  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:39.738822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:39.751947  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:39.751976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:39.816341  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:39.816364  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:39.816376  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:39.841100  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:39.841132  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:42.369166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:42.380009  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:42.380075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:42.411353  285837 cri.go:89] found id: ""
	I1213 10:10:42.411380  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.411390  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:42.411397  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:42.411455  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:42.436688  285837 cri.go:89] found id: ""
	I1213 10:10:42.436718  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.436728  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:42.436734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:42.436816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:42.462185  285837 cri.go:89] found id: ""
	I1213 10:10:42.462211  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.462220  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:42.462226  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:42.462285  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:42.487623  285837 cri.go:89] found id: ""
	I1213 10:10:42.487647  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.487657  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:42.487663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:42.487722  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:42.513508  285837 cri.go:89] found id: ""
	I1213 10:10:42.513534  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.513543  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:42.513549  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:42.513610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:42.544400  285837 cri.go:89] found id: ""
	I1213 10:10:42.544424  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.544432  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:42.544439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:42.544498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:42.571251  285837 cri.go:89] found id: ""
	I1213 10:10:42.571281  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.571290  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:42.571297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:42.571353  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:42.608069  285837 cri.go:89] found id: ""
	I1213 10:10:42.608094  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.608103  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:42.608113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:42.608124  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:42.663779  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:42.663815  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:42.677800  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:42.677839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:42.742889  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:42.742913  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:42.742927  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:42.769648  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:42.769682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.299918  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:45.313054  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:45.313153  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:45.339870  285837 cri.go:89] found id: ""
	I1213 10:10:45.339904  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.339914  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:45.339935  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:45.340013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:45.364702  285837 cri.go:89] found id: ""
	I1213 10:10:45.364736  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.364746  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:45.364752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:45.364815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:45.389159  285837 cri.go:89] found id: ""
	I1213 10:10:45.389189  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.389200  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:45.389206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:45.389286  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:45.413889  285837 cri.go:89] found id: ""
	I1213 10:10:45.413918  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.413927  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:45.413933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:45.414000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:45.438849  285837 cri.go:89] found id: ""
	I1213 10:10:45.438885  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.438895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:45.438901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:45.438962  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:45.469093  285837 cri.go:89] found id: ""
	I1213 10:10:45.469116  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.469124  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:45.469130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:45.469233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:45.493365  285837 cri.go:89] found id: ""
	I1213 10:10:45.493391  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.493401  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:45.493408  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:45.493465  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:45.517810  285837 cri.go:89] found id: ""
	I1213 10:10:45.517839  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.517848  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:45.517858  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:45.517870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
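The dmesg invocation above uses util-linux short flags: -P disables the pager, -H prints human-readable timestamps, -L=never turns color off, and --level keeps only the listed priorities. The same command spelled out with long options (equivalent, shown only for clarity):

    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400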
	I1213 10:10:45.532750  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:45.532781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:45.610253  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:45.601970    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.602367    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604011    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604678    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.606346    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
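If the endpoint itself were in doubt, the kubeconfig the test drives could be inspected directly. A sketch using kubectl's built-in config inspection, with the same binary and kubeconfig path as the failing command above:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      config view --minify -o jsonpath='{.clusters[0].cluster.server}'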
	I1213 10:10:45.610276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:45.610289  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:45.635170  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:45.635201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.662649  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:45.662727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
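With every control-plane container list coming back empty, the kubelet journal gathered above is the most likely place to see why the static pods never started. An illustrative filter over the same 400 lines (not part of the test run):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'fail|error|refused' | tail -n 40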
	I1213 10:10:48.218853  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:48.230454  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:48.230539  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:48.256210  285837 cri.go:89] found id: ""
	I1213 10:10:48.256235  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.256244  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:48.256250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:48.256311  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:48.288857  285837 cri.go:89] found id: ""
	I1213 10:10:48.288882  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.288891  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:48.288897  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:48.288952  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:48.317960  285837 cri.go:89] found id: ""
	I1213 10:10:48.317994  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.318020  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:48.318034  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:48.318108  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:48.347646  285837 cri.go:89] found id: ""
	I1213 10:10:48.347724  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.347738  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:48.347746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:48.347815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:48.372818  285837 cri.go:89] found id: ""
	I1213 10:10:48.372840  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.372849  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:48.372855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:48.372915  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:48.400208  285837 cri.go:89] found id: ""
	I1213 10:10:48.400281  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.400296  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:48.400304  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:48.400373  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:48.424245  285837 cri.go:89] found id: ""
	I1213 10:10:48.424272  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.424282  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:48.424287  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:48.424345  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:48.450041  285837 cri.go:89] found id: ""
	I1213 10:10:48.450074  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.450083  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:48.450092  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:48.450103  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:48.516704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:48.507097    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.507702    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509433    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509994    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.511703    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:48.516726  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:48.516739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:48.544227  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:48.544262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:48.581036  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:48.581067  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.643405  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:48.643440  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.157408  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:51.168232  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:51.168298  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:51.194497  285837 cri.go:89] found id: ""
	I1213 10:10:51.194533  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.194545  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:51.194552  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:51.194619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:51.219079  285837 cri.go:89] found id: ""
	I1213 10:10:51.219099  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.219107  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:51.219112  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:51.219167  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:51.244709  285837 cri.go:89] found id: ""
	I1213 10:10:51.244732  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.244740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:51.244747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:51.244806  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:51.284617  285837 cri.go:89] found id: ""
	I1213 10:10:51.284643  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.284651  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:51.284657  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:51.284713  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:51.314124  285837 cri.go:89] found id: ""
	I1213 10:10:51.314152  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.314162  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:51.314170  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:51.314228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:51.346119  285837 cri.go:89] found id: ""
	I1213 10:10:51.346144  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.346153  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:51.346160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:51.346218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:51.371813  285837 cri.go:89] found id: ""
	I1213 10:10:51.371841  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.371850  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:51.371861  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:51.371918  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:51.397126  285837 cri.go:89] found id: ""
	I1213 10:10:51.397150  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.397159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:51.397174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:51.397216  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:51.426866  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:51.426894  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:51.483164  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:51.483196  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.497003  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:51.497028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:51.582114  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:51.573287    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.574073    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.575716    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.576298    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.577879    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:51.582138  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:51.582151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.110647  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:54.121581  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:54.121653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:54.145568  285837 cri.go:89] found id: ""
	I1213 10:10:54.145591  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.145600  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:54.145606  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:54.145667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:54.171162  285837 cri.go:89] found id: ""
	I1213 10:10:54.171186  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.171195  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:54.171202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:54.171258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:54.196117  285837 cri.go:89] found id: ""
	I1213 10:10:54.196140  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.196148  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:54.196154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:54.196211  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:54.221183  285837 cri.go:89] found id: ""
	I1213 10:10:54.221226  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.221236  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:54.221243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:54.221300  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:54.246527  285837 cri.go:89] found id: ""
	I1213 10:10:54.246569  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.246578  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:54.246585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:54.246648  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:54.273839  285837 cri.go:89] found id: ""
	I1213 10:10:54.273866  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.273875  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:54.273881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:54.273936  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:54.305443  285837 cri.go:89] found id: ""
	I1213 10:10:54.305468  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.305477  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:54.305483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:54.305566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:54.337568  285837 cri.go:89] found id: ""
	I1213 10:10:54.337634  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.337649  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:54.337659  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:54.337671  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:54.394420  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:54.394456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:54.408137  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:54.408167  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:54.476257  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:54.467629    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.468321    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470154    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470801    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.472389    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:54.476279  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:54.476294  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.501779  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:54.501818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.039708  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:57.051575  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:57.051656  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:57.077147  285837 cri.go:89] found id: ""
	I1213 10:10:57.077171  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.077180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:57.077186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:57.077249  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:57.100638  285837 cri.go:89] found id: ""
	I1213 10:10:57.100662  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.100672  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:57.100679  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:57.100736  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:57.124849  285837 cri.go:89] found id: ""
	I1213 10:10:57.124872  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.124880  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:57.124886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:57.124942  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:57.149947  285837 cri.go:89] found id: ""
	I1213 10:10:57.149970  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.149979  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:57.149985  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:57.150041  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:57.177921  285837 cri.go:89] found id: ""
	I1213 10:10:57.177944  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.177952  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:57.177958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:57.178015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:57.202761  285837 cri.go:89] found id: ""
	I1213 10:10:57.202785  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.202793  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:57.202799  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:57.202861  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:57.232853  285837 cri.go:89] found id: ""
	I1213 10:10:57.232880  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.232890  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:57.232896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:57.232958  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:57.257698  285837 cri.go:89] found id: ""
	I1213 10:10:57.257725  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.257734  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:57.257744  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:57.257754  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:57.284012  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:57.284084  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.318707  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:57.318744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:57.380534  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:57.380571  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:57.394671  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:57.394704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:57.463198  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:59.963429  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:59.974005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:59.974074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:00.002819  285837 cri.go:89] found id: ""
	I1213 10:11:00.002842  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.002853  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:00.002860  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:00.002927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:00.094025  285837 cri.go:89] found id: ""
	I1213 10:11:00.094053  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.094064  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:00.094071  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:00.094142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:00.174313  285837 cri.go:89] found id: ""
	I1213 10:11:00.174336  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.174345  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:00.174352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:00.174417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:00.249900  285837 cri.go:89] found id: ""
	I1213 10:11:00.249939  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.249949  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:00.249968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:00.250053  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:00.326093  285837 cri.go:89] found id: ""
	I1213 10:11:00.326121  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.326130  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:00.326138  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:00.326207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:00.398659  285837 cri.go:89] found id: ""
	I1213 10:11:00.398685  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.398695  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:00.398702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:00.398771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:00.438080  285837 cri.go:89] found id: ""
	I1213 10:11:00.438106  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.438116  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:00.438123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:00.438200  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:00.466610  285837 cri.go:89] found id: ""
	I1213 10:11:00.466635  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.466644  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:00.466655  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:00.466668  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:00.524796  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:00.524832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:00.541430  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:00.541461  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:00.620210  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:00.611464    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.612064    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.613626    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.614181    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.615780    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:00.620234  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:00.620248  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:00.646443  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:00.646481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.175597  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:03.187100  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:03.187169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:03.213075  285837 cri.go:89] found id: ""
	I1213 10:11:03.213099  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.213108  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:03.213114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:03.213173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:03.238387  285837 cri.go:89] found id: ""
	I1213 10:11:03.238413  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.238422  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:03.238428  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:03.238485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:03.263021  285837 cri.go:89] found id: ""
	I1213 10:11:03.263047  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.263057  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:03.263064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:03.263120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:03.287967  285837 cri.go:89] found id: ""
	I1213 10:11:03.287990  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.287999  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:03.288005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:03.288070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:03.313649  285837 cri.go:89] found id: ""
	I1213 10:11:03.313676  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.313685  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:03.313691  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:03.313782  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:03.341329  285837 cri.go:89] found id: ""
	I1213 10:11:03.341395  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.341410  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:03.341418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:03.341480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:03.367350  285837 cri.go:89] found id: ""
	I1213 10:11:03.367376  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.367386  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:03.367392  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:03.367450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:03.394523  285837 cri.go:89] found id: ""
	I1213 10:11:03.394548  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.394556  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:03.394566  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:03.394579  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:03.408418  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:03.408444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:03.481932  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:03.473186    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.474065    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.475730    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.476279    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.477971    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:03.481953  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:03.481965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:03.508165  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:03.508197  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.564104  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:03.564135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.137748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:06.148529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:06.148601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:06.173118  285837 cri.go:89] found id: ""
	I1213 10:11:06.173142  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.173151  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:06.173164  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:06.173225  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:06.198710  285837 cri.go:89] found id: ""
	I1213 10:11:06.198732  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.198741  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:06.198747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:06.198802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:06.224139  285837 cri.go:89] found id: ""
	I1213 10:11:06.224163  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.224171  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:06.224183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:06.224246  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:06.249528  285837 cri.go:89] found id: ""
	I1213 10:11:06.249553  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.249568  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:06.249577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:06.249636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:06.283856  285837 cri.go:89] found id: ""
	I1213 10:11:06.283886  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.283894  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:06.283901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:06.283964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:06.307922  285837 cri.go:89] found id: ""
	I1213 10:11:06.307947  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.307956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:06.307963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:06.308020  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:06.332705  285837 cri.go:89] found id: ""
	I1213 10:11:06.332731  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.332739  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:06.332746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:06.332805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:06.358646  285837 cri.go:89] found id: ""
	I1213 10:11:06.358672  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.358681  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:06.358691  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:06.358702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.414726  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:06.414763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:06.428830  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:06.428866  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:06.495345  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:06.495373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:06.495386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:06.523314  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:06.523359  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
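Each poll in this window repeats one per-component check eight times: "sudo crictl ps -a --quiet --name=<component>" prints the IDs of matching containers (including exited ones), and an empty result is what logs.go reports as No container was found matching. A minimal standalone sketch of that same sweep, assuming only that crictl is on PATH on the node; the component list is copied from the log:

	for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                 kube-controller-manager kindnet kubernetes-dashboard; do
	  # -a includes exited containers; --quiet prints only container IDs
	  ids="$(sudo crictl ps -a --quiet --name="${component}")"
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching \"${component}\""
	  fi
	done

An empty sweep across all eight names, as seen here, means the control plane never produced any containers at all, not that individual pods started and crashed.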
	I1213 10:11:09.076696  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:09.087477  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:09.087569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:09.111658  285837 cri.go:89] found id: ""
	I1213 10:11:09.111681  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.111690  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:09.111696  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:09.111759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:09.135775  285837 cri.go:89] found id: ""
	I1213 10:11:09.135801  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.135809  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:09.135816  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:09.135872  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:09.165477  285837 cri.go:89] found id: ""
	I1213 10:11:09.165500  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.165514  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:09.165520  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:09.165576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:09.194399  285837 cri.go:89] found id: ""
	I1213 10:11:09.194421  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.194437  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:09.194446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:09.194503  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:09.223486  285837 cri.go:89] found id: ""
	I1213 10:11:09.223508  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.223537  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:09.223544  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:09.223603  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:09.252819  285837 cri.go:89] found id: ""
	I1213 10:11:09.252842  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.252851  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:09.252857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:09.252916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:09.277570  285837 cri.go:89] found id: ""
	I1213 10:11:09.277641  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.277656  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:09.277666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:09.277729  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:09.302629  285837 cri.go:89] found id: ""
	I1213 10:11:09.302652  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.302661  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:09.302671  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:09.302682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:09.358773  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:09.358811  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:09.372815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:09.372842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:09.441717  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:09.441793  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:09.441822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:09.466485  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:09.466517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:11.993817  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:12.018615  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:12.018690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:12.044911  285837 cri.go:89] found id: ""
	I1213 10:11:12.044934  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.044943  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:12.044949  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:12.045013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:12.069918  285837 cri.go:89] found id: ""
	I1213 10:11:12.069940  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.069949  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:12.069955  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:12.070018  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:12.094440  285837 cri.go:89] found id: ""
	I1213 10:11:12.094461  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.094470  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:12.094476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:12.094530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:12.118079  285837 cri.go:89] found id: ""
	I1213 10:11:12.118099  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.118108  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:12.118114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:12.118169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:12.145090  285837 cri.go:89] found id: ""
	I1213 10:11:12.145115  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.145125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:12.145131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:12.145186  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:12.168654  285837 cri.go:89] found id: ""
	I1213 10:11:12.168725  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.168749  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:12.168762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:12.168820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:12.192603  285837 cri.go:89] found id: ""
	I1213 10:11:12.192677  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.192704  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:12.192726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:12.192802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:12.216389  285837 cri.go:89] found id: ""
	I1213 10:11:12.216454  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.216478  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:12.216501  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:12.216517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:12.273281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:12.273315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:12.286866  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:12.286903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:12.353852  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:12.353884  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:12.353914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:12.379896  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:12.379931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
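Every "describe nodes" attempt in this window fails identically: kubectl, driven by the node-local /var/lib/minikube/kubeconfig, dials https://localhost:8443 and gets connection refused, the classic symptom of an API server that never came up rather than a networking fault. Two quick checks from inside the node confirm which it is; the pgrep pattern is taken verbatim from the log, while the curl probe of the apiserver healthz endpoint is an editorial addition, not something the harness runs:

	# Does a kube-apiserver process exist at all? (pattern verbatim from the log)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not running'
	# Is anything answering on the apiserver port? -k skips certificate verification
	curl -sk https://localhost:8443/healthz || echo 'nothing listening on 8443'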
	I1213 10:11:14.910354  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:14.920854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:14.920922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:14.946408  285837 cri.go:89] found id: ""
	I1213 10:11:14.946430  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.946439  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:14.946446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:14.946501  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:14.977293  285837 cri.go:89] found id: ""
	I1213 10:11:14.977322  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.977337  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:14.977343  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:14.977414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:15.010967  285837 cri.go:89] found id: ""
	I1213 10:11:15.011055  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.011079  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:15.011098  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:15.011201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:15.050270  285837 cri.go:89] found id: ""
	I1213 10:11:15.050294  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.050314  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:15.050321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:15.050387  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:15.076902  285837 cri.go:89] found id: ""
	I1213 10:11:15.076927  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.076936  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:15.076943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:15.077003  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:15.106349  285837 cri.go:89] found id: ""
	I1213 10:11:15.106379  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.106389  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:15.106395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:15.106458  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:15.134472  285837 cri.go:89] found id: ""
	I1213 10:11:15.134497  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.134506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:15.134512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:15.134569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:15.161713  285837 cri.go:89] found id: ""
	I1213 10:11:15.161740  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.161750  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:15.161759  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:15.161773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:15.217480  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:15.217512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:15.231189  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:15.231217  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:15.304481  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:15.304502  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:15.304515  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:15.329819  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:15.329853  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:17.857044  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:17.868755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:17.868830  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:17.892866  285837 cri.go:89] found id: ""
	I1213 10:11:17.892890  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.892900  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:17.892906  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:17.892969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:17.918428  285837 cri.go:89] found id: ""
	I1213 10:11:17.918450  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.918459  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:17.918467  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:17.918520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:17.941924  285837 cri.go:89] found id: ""
	I1213 10:11:17.941945  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.941953  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:17.941959  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:17.942015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:17.966130  285837 cri.go:89] found id: ""
	I1213 10:11:17.966153  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.966162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:17.966168  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:17.966266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:17.994412  285837 cri.go:89] found id: ""
	I1213 10:11:17.994437  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.994446  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:17.994452  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:17.994509  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:18.020369  285837 cri.go:89] found id: ""
	I1213 10:11:18.020392  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.020401  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:18.020407  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:18.020485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:18.047590  285837 cri.go:89] found id: ""
	I1213 10:11:18.047614  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.047623  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:18.047629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:18.047689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:18.074433  285837 cri.go:89] found id: ""
	I1213 10:11:18.074456  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.074465  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:18.074475  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:18.074487  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:18.101094  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:18.101129  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:18.129666  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:18.129695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:18.185620  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:18.185652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:18.199477  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:18.199503  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:18.264408  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
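Between polls the harness gathers the same five log sources on every cycle; only their order rotates. The equivalent one-shot collection, with each command copied verbatim from the Run: lines above, can be replayed on the node as:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u containerd -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a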
	I1213 10:11:20.765401  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:20.778692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:20.778759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:20.856776  285837 cri.go:89] found id: ""
	I1213 10:11:20.856798  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.856807  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:20.856813  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:20.856871  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:20.886867  285837 cri.go:89] found id: ""
	I1213 10:11:20.886896  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.886912  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:20.886918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:20.886992  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:20.915220  285837 cri.go:89] found id: ""
	I1213 10:11:20.915245  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.915254  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:20.915260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:20.915318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:20.939562  285837 cri.go:89] found id: ""
	I1213 10:11:20.939585  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.939594  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:20.939600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:20.939667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:20.964172  285837 cri.go:89] found id: ""
	I1213 10:11:20.964195  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.964204  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:20.964210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:20.964269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:20.989184  285837 cri.go:89] found id: ""
	I1213 10:11:20.989206  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.989215  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:20.989221  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:20.989287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:21.015584  285837 cri.go:89] found id: ""
	I1213 10:11:21.015608  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.015616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:21.015623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:21.015692  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:21.041789  285837 cri.go:89] found id: ""
	I1213 10:11:21.041812  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.041820  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:21.041829  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:21.041842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:21.055424  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:21.055450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:21.119438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:21.119456  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:21.119469  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:21.144678  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:21.144713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:21.177284  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:21.177313  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:23.742410  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:23.752527  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:23.752601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:23.812957  285837 cri.go:89] found id: ""
	I1213 10:11:23.812979  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.812987  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:23.812994  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:23.813052  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:23.858208  285837 cri.go:89] found id: ""
	I1213 10:11:23.858236  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.858246  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:23.858253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:23.858315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:23.885293  285837 cri.go:89] found id: ""
	I1213 10:11:23.885318  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.885328  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:23.885334  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:23.885396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:23.911374  285837 cri.go:89] found id: ""
	I1213 10:11:23.911399  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.911409  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:23.911541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:23.911621  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:23.940586  285837 cri.go:89] found id: ""
	I1213 10:11:23.940611  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.940620  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:23.940625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:23.940683  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:23.965387  285837 cri.go:89] found id: ""
	I1213 10:11:23.965413  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.965423  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:23.965430  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:23.965491  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:23.989910  285837 cri.go:89] found id: ""
	I1213 10:11:23.989936  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.989945  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:23.989952  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:23.990009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:24.016511  285837 cri.go:89] found id: ""
	I1213 10:11:24.016539  285837 logs.go:282] 0 containers: []
	W1213 10:11:24.016548  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:24.016558  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:24.016569  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:24.076500  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:24.076542  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:24.090891  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:24.090920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:24.158444  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:24.158466  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:24.158478  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:24.184352  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:24.184389  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:26.715866  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:26.726291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:26.726358  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:26.749726  285837 cri.go:89] found id: ""
	I1213 10:11:26.749748  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.749757  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:26.749763  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:26.749820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:26.798311  285837 cri.go:89] found id: ""
	I1213 10:11:26.798333  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.798341  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:26.798347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:26.798403  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:26.855482  285837 cri.go:89] found id: ""
	I1213 10:11:26.855506  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.855541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:26.855548  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:26.855606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:26.887763  285837 cri.go:89] found id: ""
	I1213 10:11:26.887833  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.887857  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:26.887876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:26.887963  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:26.913160  285837 cri.go:89] found id: ""
	I1213 10:11:26.913183  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.913192  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:26.913199  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:26.913266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:26.940887  285837 cri.go:89] found id: ""
	I1213 10:11:26.940965  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.940996  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:26.941004  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:26.941070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:26.965212  285837 cri.go:89] found id: ""
	I1213 10:11:26.965233  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.965242  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:26.965248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:26.965313  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:26.989687  285837 cri.go:89] found id: ""
	I1213 10:11:26.989710  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.989718  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:26.989733  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:26.989744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:27.020130  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:27.020156  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:27.075963  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:27.076001  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:27.089421  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:27.089452  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:27.154208  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:27.154231  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:27.154243  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
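The timestamps show a steady cadence: each "sudo pgrep -xnf kube-apiserver.*minikube.*" poll lands roughly three seconds after the previous gather finishes, and the loop simply runs until the test's outer timeout expires. A bounded retry with the same shape, as a hypothetical sketch (the 300-second deadline is illustrative, not the harness's actual timeout):

	deadline=$((SECONDS + 300))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  if [ "${SECONDS}" -ge "${deadline}" ]; then
	    echo 'timed out waiting for kube-apiserver' >&2
	    exit 1
	  fi
	  sleep 3
	done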
	I1213 10:11:29.679077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:29.689987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:29.690113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:29.719241  285837 cri.go:89] found id: ""
	I1213 10:11:29.719304  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.719318  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:29.719325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:29.719382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:29.745413  285837 cri.go:89] found id: ""
	I1213 10:11:29.745511  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.745533  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:29.745541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:29.745624  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:29.789119  285837 cri.go:89] found id: ""
	I1213 10:11:29.789193  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.789228  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:29.789251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:29.789362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:29.865327  285837 cri.go:89] found id: ""
	I1213 10:11:29.865413  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.865429  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:29.865437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:29.865495  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:29.890183  285837 cri.go:89] found id: ""
	I1213 10:11:29.890260  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.890283  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:29.890301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:29.890397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:29.919549  285837 cri.go:89] found id: ""
	I1213 10:11:29.919622  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.919646  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:29.919666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:29.919771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:29.945219  285837 cri.go:89] found id: ""
	I1213 10:11:29.945248  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.945257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:29.945264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:29.945364  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:29.973791  285837 cri.go:89] found id: ""
	I1213 10:11:29.973822  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.973832  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:29.973842  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:29.973870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:30.030470  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:30.030512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:30.047458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:30.047559  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:30.123116  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:30.123215  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:30.123250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:30.149652  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:30.149689  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
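	Each polling cycle sweeps the full set of control-plane components through crictl and finds no containers at all. The same sweep can be written as a loop; a minimal sketch, assuming it runs inside the node where crictl is wired to containerd:
	    # One crictl query per component, mirroring the cri.go lookups above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"   # empty output: no container found
	    done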
	I1213 10:11:32.679599  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:32.690298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:32.690372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:32.713694  285837 cri.go:89] found id: ""
	I1213 10:11:32.713718  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.713726  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:32.713733  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:32.713790  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:32.738621  285837 cri.go:89] found id: ""
	I1213 10:11:32.738645  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.738654  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:32.738660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:32.738720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:32.762830  285837 cri.go:89] found id: ""
	I1213 10:11:32.762855  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.762865  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:32.762871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:32.762928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:32.799422  285837 cri.go:89] found id: ""
	I1213 10:11:32.799448  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.799464  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:32.799471  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:32.799543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:32.856726  285837 cri.go:89] found id: ""
	I1213 10:11:32.856759  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.856768  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:32.856775  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:32.856839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:32.883319  285837 cri.go:89] found id: ""
	I1213 10:11:32.883346  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.883356  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:32.883362  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:32.883422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:32.909028  285837 cri.go:89] found id: ""
	I1213 10:11:32.909054  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.909063  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:32.909070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:32.909127  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:32.938657  285837 cri.go:89] found id: ""
	I1213 10:11:32.938691  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.938701  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:32.938710  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:32.938721  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:32.994400  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:32.994434  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.008614  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:33.008653  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:33.076509  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:33.076539  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:33.076553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:33.101599  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:33.101631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
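	With no containers to inspect, the only useful logs come from the host services, and the commands are exactly the ones shown above; run by hand they are:
	    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal entries
	    sudo journalctl -u containerd -n 400     # last 400 containerd journal entries
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400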
	I1213 10:11:35.629072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:35.639660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:35.639731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:35.664032  285837 cri.go:89] found id: ""
	I1213 10:11:35.664060  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.664068  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:35.664076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:35.664130  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:35.692081  285837 cri.go:89] found id: ""
	I1213 10:11:35.692108  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.692118  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:35.692124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:35.692180  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:35.717152  285837 cri.go:89] found id: ""
	I1213 10:11:35.717177  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.717186  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:35.717192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:35.717251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:35.741898  285837 cri.go:89] found id: ""
	I1213 10:11:35.741931  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.741940  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:35.741946  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:35.742013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:35.766255  285837 cri.go:89] found id: ""
	I1213 10:11:35.766289  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.766298  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:35.766305  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:35.766370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:35.829052  285837 cri.go:89] found id: ""
	I1213 10:11:35.829093  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.829104  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:35.829111  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:35.829189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:35.872000  285837 cri.go:89] found id: ""
	I1213 10:11:35.872072  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.872085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:35.872092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:35.872162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:35.897842  285837 cri.go:89] found id: ""
	I1213 10:11:35.897874  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.897883  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:35.897893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:35.897911  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:35.955605  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:35.955640  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:35.969234  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:35.969262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:36.035000  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:36.035063  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:36.035083  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:36.061000  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:36.061037  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
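	The "container status" step is deliberately runtime-agnostic: it prefers crictl but degrades to docker, using the one-liner logged above:
	    # `which crictl || echo crictl` keeps the command word non-empty even when
	    # which finds nothing; if the crictl invocation then fails, docker is tried.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a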
	I1213 10:11:38.589308  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:38.599753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:38.599818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:38.623379  285837 cri.go:89] found id: ""
	I1213 10:11:38.623400  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.623409  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:38.623418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:38.623476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:38.649806  285837 cri.go:89] found id: ""
	I1213 10:11:38.649830  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.649840  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:38.649847  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:38.649908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:38.674234  285837 cri.go:89] found id: ""
	I1213 10:11:38.674257  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.674266  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:38.674272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:38.674334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:38.698759  285837 cri.go:89] found id: ""
	I1213 10:11:38.698780  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.698789  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:38.698795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:38.698851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:38.725178  285837 cri.go:89] found id: ""
	I1213 10:11:38.725205  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.725215  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:38.725222  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:38.725281  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:38.766167  285837 cri.go:89] found id: ""
	I1213 10:11:38.766194  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.766204  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:38.766210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:38.766265  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:38.808982  285837 cri.go:89] found id: ""
	I1213 10:11:38.809009  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.809017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:38.809023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:38.809080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:38.870538  285837 cri.go:89] found id: ""
	I1213 10:11:38.870560  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.870568  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:38.870578  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:38.870589  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:38.928916  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:38.928958  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:38.943274  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:38.943304  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:39.011182  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:39.011208  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:39.011223  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:39.038343  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:39.038377  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.571555  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:41.582245  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:41.582319  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:41.609447  285837 cri.go:89] found id: ""
	I1213 10:11:41.609473  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.609483  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:41.609490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:41.609546  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:41.637801  285837 cri.go:89] found id: ""
	I1213 10:11:41.637823  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.637832  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:41.637838  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:41.637901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:41.661762  285837 cri.go:89] found id: ""
	I1213 10:11:41.661786  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.661795  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:41.661801  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:41.661865  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:41.685944  285837 cri.go:89] found id: ""
	I1213 10:11:41.685966  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.685981  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:41.685987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:41.686044  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:41.710847  285837 cri.go:89] found id: ""
	I1213 10:11:41.710874  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.710883  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:41.710889  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:41.710947  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:41.739921  285837 cri.go:89] found id: ""
	I1213 10:11:41.739947  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.739956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:41.739962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:41.740021  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:41.764216  285837 cri.go:89] found id: ""
	I1213 10:11:41.764245  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.764254  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:41.764260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:41.764318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:41.822929  285837 cri.go:89] found id: ""
	I1213 10:11:41.822960  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.822969  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:41.822995  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:41.823012  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.860056  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:41.860087  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:41.916192  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:41.916225  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:41.932977  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:41.933051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:41.996358  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:41.996420  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:41.996436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
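	Note that "describe nodes" does not use the host's kubectl: minikube invokes the kubectl binary staged for the target Kubernetes version against the in-node kubeconfig, exactly as logged above:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig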
	I1213 10:11:44.525380  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:44.536068  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:44.536183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:44.561444  285837 cri.go:89] found id: ""
	I1213 10:11:44.561476  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.561485  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:44.561491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:44.561552  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:44.586945  285837 cri.go:89] found id: ""
	I1213 10:11:44.586975  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.586985  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:44.586991  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:44.587057  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:44.612842  285837 cri.go:89] found id: ""
	I1213 10:11:44.612874  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.612885  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:44.612891  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:44.612949  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:44.638444  285837 cri.go:89] found id: ""
	I1213 10:11:44.638472  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.638482  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:44.638489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:44.638547  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:44.664168  285837 cri.go:89] found id: ""
	I1213 10:11:44.664191  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.664200  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:44.664206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:44.664264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:44.693563  285837 cri.go:89] found id: ""
	I1213 10:11:44.693634  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.693659  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:44.693675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:44.693748  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:44.719349  285837 cri.go:89] found id: ""
	I1213 10:11:44.719376  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.719385  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:44.719391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:44.719456  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:44.744438  285837 cri.go:89] found id: ""
	I1213 10:11:44.744467  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.744476  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:44.744485  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:44.744498  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:44.815232  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:44.815321  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:44.836304  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:44.836331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:44.928422  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:44.928443  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:44.928456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:44.954308  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:44.954348  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
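	Each cycle opens with a process-level gate before any CRI queries: if pgrep finds a running kube-apiserver the loop can stop polling. In the logged command, -x requires the pattern to match in full, -n keeps only the newest match, and -f matches against the whole command line; quoted here for safe interactive use:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'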
	I1213 10:11:47.482268  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:47.492724  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:47.492804  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:47.517619  285837 cri.go:89] found id: ""
	I1213 10:11:47.517646  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.517655  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:47.517661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:47.517731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:47.543100  285837 cri.go:89] found id: ""
	I1213 10:11:47.543137  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.543150  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:47.543160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:47.543223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:47.573882  285837 cri.go:89] found id: ""
	I1213 10:11:47.573906  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.573915  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:47.573922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:47.573979  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:47.598649  285837 cri.go:89] found id: ""
	I1213 10:11:47.598676  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.598685  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:47.598692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:47.598753  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:47.629998  285837 cri.go:89] found id: ""
	I1213 10:11:47.630034  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.630048  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:47.630056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:47.630135  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:47.658608  285837 cri.go:89] found id: ""
	I1213 10:11:47.658652  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.658662  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:47.658669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:47.658739  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:47.685293  285837 cri.go:89] found id: ""
	I1213 10:11:47.685337  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.685346  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:47.685352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:47.685419  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:47.711048  285837 cri.go:89] found id: ""
	I1213 10:11:47.711072  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.711081  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:47.711091  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:47.711102  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:47.774561  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:47.774611  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:47.814155  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:47.814228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:47.909982  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:47.910015  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:47.910028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:47.938465  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:47.938502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:50.475972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:50.488352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:50.488421  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:50.513516  285837 cri.go:89] found id: ""
	I1213 10:11:50.513548  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.513558  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:50.513565  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:50.513619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:50.538473  285837 cri.go:89] found id: ""
	I1213 10:11:50.538498  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.538507  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:50.538513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:50.538569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:50.562753  285837 cri.go:89] found id: ""
	I1213 10:11:50.562775  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.562784  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:50.562790  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:50.562844  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:50.587561  285837 cri.go:89] found id: ""
	I1213 10:11:50.587587  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.587597  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:50.587603  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:50.587658  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:50.612019  285837 cri.go:89] found id: ""
	I1213 10:11:50.612048  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.612058  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:50.612064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:50.612123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:50.636935  285837 cri.go:89] found id: ""
	I1213 10:11:50.636959  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.636967  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:50.636973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:50.637034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:50.661053  285837 cri.go:89] found id: ""
	I1213 10:11:50.661076  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.661085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:50.661091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:50.661148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:50.690108  285837 cri.go:89] found id: ""
	I1213 10:11:50.690178  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.690201  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:50.690223  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:50.690262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:50.748741  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:50.748775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:50.762458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:50.762490  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:50.892763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:50.892783  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:50.892796  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:50.918206  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:50.918240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
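
The lines above are one full iteration of the apiserver health wait: pgrep finds no kube-apiserver process, so the harness enumerates CRI containers for each control-plane component and re-gathers logs before retrying. The pgrep timestamps that follow (10:11:53.447, 10:11:56.405, 10:11:59.341, ...) put the retry interval at roughly three seconds. A minimal sketch of that loop shape, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (an illustration, not the minikube source):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner; here it runs
// the command locally so the sketch stays self-contained.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServer retries the same pgrep probe seen in the log until the
// kube-apiserver process appears or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil // process found: the apiserver is at least running
		}
		// Not up yet: the real harness lists CRI containers and gathers
		// logs here before sleeping, which is what produces each cycle above.
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
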
	I1213 10:11:53.447378  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:53.457486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:53.457551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:53.482258  285837 cri.go:89] found id: ""
	I1213 10:11:53.482283  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.482292  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:53.482299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:53.482357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:53.511304  285837 cri.go:89] found id: ""
	I1213 10:11:53.511330  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.511339  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:53.511345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:53.511405  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:53.540251  285837 cri.go:89] found id: ""
	I1213 10:11:53.540277  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.540286  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:53.540291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:53.540349  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:53.565753  285837 cri.go:89] found id: ""
	I1213 10:11:53.565781  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.565791  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:53.565797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:53.565855  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:53.595124  285837 cri.go:89] found id: ""
	I1213 10:11:53.595151  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.595160  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:53.595166  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:53.595224  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:53.620269  285837 cri.go:89] found id: ""
	I1213 10:11:53.620293  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.620302  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:53.620311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:53.620369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:53.645281  285837 cri.go:89] found id: ""
	I1213 10:11:53.645309  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.645318  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:53.645325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:53.645388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:53.670326  285837 cri.go:89] found id: ""
	I1213 10:11:53.670351  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.670360  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:53.670369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:53.670386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:53.726845  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:53.726879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:53.740167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:53.740194  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:53.843634  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:53.843657  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:53.843669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:53.870910  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:53.870995  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
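
Each `found id: ""` / `0 containers: []` pair above comes from a per-component `crictl ps -a --quiet --name=...` query, which prints one container ID per line and prints nothing at all when no container matches. A sketch of the same check, assuming crictl is on the node's PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// criContainerIDs runs the same query as the log: list all containers
// (running or exited) whose name matches, printing IDs only.
func criContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// Empty output yields an empty slice, i.e. the `0 containers` case.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := criContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
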
	I1213 10:11:56.405428  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:56.415940  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:56.416016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:56.449974  285837 cri.go:89] found id: ""
	I1213 10:11:56.449996  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.450004  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:56.450010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:56.450069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:56.474847  285837 cri.go:89] found id: ""
	I1213 10:11:56.474873  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.474882  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:56.474888  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:56.474946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:56.504742  285837 cri.go:89] found id: ""
	I1213 10:11:56.504768  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.504777  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:56.504783  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:56.504841  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:56.529471  285837 cri.go:89] found id: ""
	I1213 10:11:56.529493  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.529502  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:56.529509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:56.529569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:56.553719  285837 cri.go:89] found id: ""
	I1213 10:11:56.553740  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.553749  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:56.553755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:56.553812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:56.579917  285837 cri.go:89] found id: ""
	I1213 10:11:56.579942  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.579950  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:56.579957  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:56.580015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:56.603606  285837 cri.go:89] found id: ""
	I1213 10:11:56.603629  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.603638  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:56.603644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:56.603702  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:56.628438  285837 cri.go:89] found id: ""
	I1213 10:11:56.628460  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.628469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:56.628479  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:56.628491  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:56.655218  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:56.655245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:56.711105  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:56.711138  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:56.724564  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:56.724597  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:56.800105  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:56.800126  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:56.800141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
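
Every `describe nodes` attempt above fails the same way: the version-matched kubectl binary under /var/lib/minikube/binaries reads /var/lib/minikube/kubeconfig, dials the apiserver at localhost:8443, and gets connection refused, meaning nothing is listening on the port yet. The same probe, reduced to a sketch (run where localhost:8443 would be the apiserver):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Connection refused here is exactly the condition behind the
	// repeated kubectl memcache.go errors in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
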
	I1213 10:11:59.341824  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:59.351965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:59.352032  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:59.376522  285837 cri.go:89] found id: ""
	I1213 10:11:59.376544  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.376553  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:59.376559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:59.376623  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:59.405422  285837 cri.go:89] found id: ""
	I1213 10:11:59.405497  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.405522  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:59.405537  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:59.405608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:59.430317  285837 cri.go:89] found id: ""
	I1213 10:11:59.430344  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.430353  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:59.430359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:59.430417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:59.457827  285837 cri.go:89] found id: ""
	I1213 10:11:59.457854  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.457862  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:59.457868  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:59.457924  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:59.483234  285837 cri.go:89] found id: ""
	I1213 10:11:59.483261  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.483270  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:59.483277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:59.483337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:59.508270  285837 cri.go:89] found id: ""
	I1213 10:11:59.508296  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.508314  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:59.508322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:59.508379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:59.532819  285837 cri.go:89] found id: ""
	I1213 10:11:59.532842  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.532851  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:59.532857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:59.532913  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:59.556482  285837 cri.go:89] found id: ""
	I1213 10:11:59.556508  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.556517  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:59.556527  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:59.556540  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:59.611281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:59.611315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:59.624666  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:59.624694  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:59.690085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:59.690108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:59.690122  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.715666  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:59.715703  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:02.245206  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:02.256067  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:02.256147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:02.280777  285837 cri.go:89] found id: ""
	I1213 10:12:02.280801  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.280809  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:02.280821  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:02.280885  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:02.305877  285837 cri.go:89] found id: ""
	I1213 10:12:02.305905  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.305914  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:02.305920  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:02.305988  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:02.330860  285837 cri.go:89] found id: ""
	I1213 10:12:02.330886  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.330894  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:02.330900  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:02.330965  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:02.356613  285837 cri.go:89] found id: ""
	I1213 10:12:02.356649  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.356659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:02.356665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:02.356746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:02.388158  285837 cri.go:89] found id: ""
	I1213 10:12:02.388181  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.388190  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:02.388196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:02.388256  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:02.415431  285837 cri.go:89] found id: ""
	I1213 10:12:02.415454  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.415462  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:02.415468  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:02.415538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:02.442554  285837 cri.go:89] found id: ""
	I1213 10:12:02.442580  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.442589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:02.442595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:02.442654  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:02.468134  285837 cri.go:89] found id: ""
	I1213 10:12:02.468159  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.468167  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:02.468177  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:02.468188  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:02.526799  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:02.526832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:02.542508  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:02.542533  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:02.616614  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:02.616637  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:02.616650  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:02.641382  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:02.641415  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:05.169197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:05.179948  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:05.180017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:05.205082  285837 cri.go:89] found id: ""
	I1213 10:12:05.205105  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.205113  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:05.205119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:05.205176  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:05.234272  285837 cri.go:89] found id: ""
	I1213 10:12:05.234295  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.234305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:05.234311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:05.234369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:05.259024  285837 cri.go:89] found id: ""
	I1213 10:12:05.259047  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.259055  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:05.259062  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:05.259120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:05.287223  285837 cri.go:89] found id: ""
	I1213 10:12:05.287249  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.287257  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:05.287264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:05.287323  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:05.311741  285837 cri.go:89] found id: ""
	I1213 10:12:05.311831  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.311859  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:05.311904  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:05.312016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:05.337137  285837 cri.go:89] found id: ""
	I1213 10:12:05.337161  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.337170  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:05.337176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:05.337232  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:05.361938  285837 cri.go:89] found id: ""
	I1213 10:12:05.361967  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.361976  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:05.361982  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:05.362063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:05.387423  285837 cri.go:89] found id: ""
	I1213 10:12:05.387460  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.387469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:05.387478  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:05.387489  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:05.446385  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:05.446423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:05.460052  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:05.460075  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:05.534925  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:05.534954  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:05.534969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:05.561237  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:05.561278  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
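
The `container status` command just above is itself a fallback chain: use crictl if `which crictl` resolves, otherwise fall back to `docker ps -a`. The same selection in Go (a sketch; binary names as in the log, and it handles only the missing-binary case, whereas the shell `||` also catches a crictl invocation that fails):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when it resolves on PATH, mirroring
	// `which crictl || echo crictl` in the logged command.
	args := []string{"docker", "ps", "-a"}
	if _, err := exec.LookPath("crictl"); err == nil {
		args = []string{"crictl", "ps", "-a"}
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
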
	I1213 10:12:08.090523  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:08.103723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:08.103793  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:08.130437  285837 cri.go:89] found id: ""
	I1213 10:12:08.130464  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.130473  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:08.130479  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:08.130536  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:08.158259  285837 cri.go:89] found id: ""
	I1213 10:12:08.158286  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.158295  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:08.158301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:08.158359  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:08.183457  285837 cri.go:89] found id: ""
	I1213 10:12:08.183484  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.183493  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:08.183499  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:08.183589  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:08.207480  285837 cri.go:89] found id: ""
	I1213 10:12:08.207507  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.207613  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:08.207620  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:08.207681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:08.231959  285837 cri.go:89] found id: ""
	I1213 10:12:08.232037  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.232053  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:08.232061  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:08.232131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:08.255921  285837 cri.go:89] found id: ""
	I1213 10:12:08.255986  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.256003  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:08.256010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:08.256074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:08.280187  285837 cri.go:89] found id: ""
	I1213 10:12:08.280254  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.280269  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:08.280276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:08.280332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:08.308900  285837 cri.go:89] found id: ""
	I1213 10:12:08.308974  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.308997  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:08.309014  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:08.309029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:08.322959  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:08.322986  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:08.387674  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:08.387701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:08.387715  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:08.413378  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:08.413414  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:08.444856  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:08.444888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.000292  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:11.012216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:11.012287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:11.063803  285837 cri.go:89] found id: ""
	I1213 10:12:11.063829  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.063838  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:11.063845  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:11.063910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:11.103072  285837 cri.go:89] found id: ""
	I1213 10:12:11.103099  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.103109  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:11.103115  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:11.103171  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:11.138581  285837 cri.go:89] found id: ""
	I1213 10:12:11.138606  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.138614  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:11.138631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:11.138686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:11.163663  285837 cri.go:89] found id: ""
	I1213 10:12:11.163735  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.163760  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:11.163779  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:11.163862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:11.188635  285837 cri.go:89] found id: ""
	I1213 10:12:11.188701  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.188716  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:11.188722  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:11.188779  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:11.217597  285837 cri.go:89] found id: ""
	I1213 10:12:11.217620  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.217628  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:11.217634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:11.217690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:11.241986  285837 cri.go:89] found id: ""
	I1213 10:12:11.242009  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.242017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:11.242023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:11.242078  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:11.266556  285837 cri.go:89] found id: ""
	I1213 10:12:11.266578  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.266586  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:11.266596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:11.266607  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:11.298567  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:11.298592  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.354117  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:11.354151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:11.367112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:11.367187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:11.430754  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:11.430832  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:11.430859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
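
The `listing CRI containers in root /run/containerd/runc/k8s.io` prefix on every check refers to containerd's k8s.io namespace, which is where CRI-created pod containers live (the runc state directory is scoped per namespace). The same emptiness can be confirmed against containerd directly; a sketch, assuming ctr ships alongside containerd in the node image:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Query containerd's CRI namespace directly; with no control-plane
	// containers ever created, this prints only the column header.
	out, err := exec.Command("sudo", "ctr", "-n", "k8s.io", "containers", "list").CombinedOutput()
	if err != nil {
		fmt.Println("ctr query failed:", err)
	}
	fmt.Print(string(out))
}
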
	I1213 10:12:13.957251  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:13.968979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:13.969058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:13.994304  285837 cri.go:89] found id: ""
	I1213 10:12:13.994326  285837 logs.go:282] 0 containers: []
	W1213 10:12:13.994334  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:13.994341  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:13.994396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:14.032552  285837 cri.go:89] found id: ""
	I1213 10:12:14.032584  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.032593  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:14.032600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:14.032663  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:14.104797  285837 cri.go:89] found id: ""
	I1213 10:12:14.104823  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.104833  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:14.104839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:14.104901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:14.130796  285837 cri.go:89] found id: ""
	I1213 10:12:14.130821  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.130831  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:14.130837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:14.130892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:14.157587  285837 cri.go:89] found id: ""
	I1213 10:12:14.157616  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.157625  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:14.157631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:14.157689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:14.183166  285837 cri.go:89] found id: ""
	I1213 10:12:14.183191  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.183199  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:14.183205  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:14.183271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:14.207844  285837 cri.go:89] found id: ""
	I1213 10:12:14.207871  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.207880  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:14.207886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:14.207943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:14.232398  285837 cri.go:89] found id: ""
	I1213 10:12:14.232420  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.232429  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
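Every `found id: ""` above means containerd has no record of that component in any state (`-a` includes exited containers), so the control-plane pods were never created rather than crashed. The same sweep can be reproduced on the node with a short loop built from the exact crictl flags in the log:

    # list containers in any state for each expected control-plane name
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done

With nothing to inspect at the container level, the remaining evidence has to come from the container status listing and the systemd journals gathered below.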
	I1213 10:12:14.232438  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:14.232450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:14.263838  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:14.263869  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:14.322835  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:14.322870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:14.336577  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:14.336609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:14.404961  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:14.405007  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:14.405047  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:16.930423  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:16.941126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:16.941197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:16.968990  285837 cri.go:89] found id: ""
	I1213 10:12:16.969013  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.969023  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:16.969029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:16.969093  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:16.994277  285837 cri.go:89] found id: ""
	I1213 10:12:16.994298  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.994307  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:16.994319  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:16.994374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:17.052160  285837 cri.go:89] found id: ""
	I1213 10:12:17.052187  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.052196  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:17.052202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:17.052260  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:17.112056  285837 cri.go:89] found id: ""
	I1213 10:12:17.112122  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.112136  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:17.112142  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:17.112201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:17.137264  285837 cri.go:89] found id: ""
	I1213 10:12:17.137287  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.137295  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:17.137301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:17.137356  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:17.161759  285837 cri.go:89] found id: ""
	I1213 10:12:17.161780  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.161802  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:17.161808  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:17.161864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:17.187256  285837 cri.go:89] found id: ""
	I1213 10:12:17.187288  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.187296  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:17.187302  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:17.187372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:17.213316  285837 cri.go:89] found id: ""
	I1213 10:12:17.213380  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.213400  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:17.213413  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:17.213424  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:17.241644  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:17.241674  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:17.298584  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:17.298617  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:17.313303  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:17.313331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:17.387719  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:17.387742  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:17.387755  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
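Because no containers exist, the journals are the only place the failure can show up, which is why each cycle tails the kubelet and containerd units. The equivalent manual commands, mirroring the log (only `--no-pager` added so the output pipes cleanly):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    # kernel warnings and worse, as in the dmesg step above
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400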
	I1213 10:12:19.919282  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:19.929646  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:19.929711  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:19.961717  285837 cri.go:89] found id: ""
	I1213 10:12:19.961739  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.961748  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:19.961754  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:19.961811  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:19.986281  285837 cri.go:89] found id: ""
	I1213 10:12:19.986306  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.986315  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:19.986321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:19.986375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:20.035442  285837 cri.go:89] found id: ""
	I1213 10:12:20.035468  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.035478  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:20.035484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:20.035574  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:20.086605  285837 cri.go:89] found id: ""
	I1213 10:12:20.086627  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.086635  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:20.086642  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:20.086698  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:20.121043  285837 cri.go:89] found id: ""
	I1213 10:12:20.121065  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.121073  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:20.121079  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:20.121136  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:20.148016  285837 cri.go:89] found id: ""
	I1213 10:12:20.148083  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.148105  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:20.148124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:20.148209  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:20.175168  285837 cri.go:89] found id: ""
	I1213 10:12:20.175234  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.175257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:20.175276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:20.175363  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:20.206568  285837 cri.go:89] found id: ""
	I1213 10:12:20.206590  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.206599  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:20.206608  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:20.206619  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:20.234244  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:20.234308  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:20.290937  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:20.290972  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:20.304498  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:20.304527  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:20.367763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:20.367830  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:20.367849  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:22.894711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:22.905901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:22.905969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:22.936437  285837 cri.go:89] found id: ""
	I1213 10:12:22.936460  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.936468  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:22.936474  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:22.936533  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:22.961367  285837 cri.go:89] found id: ""
	I1213 10:12:22.961390  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.961416  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:22.961425  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:22.961484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:22.984924  285837 cri.go:89] found id: ""
	I1213 10:12:22.984949  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.984958  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:22.984964  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:22.985046  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:23.012110  285837 cri.go:89] found id: ""
	I1213 10:12:23.012175  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.012191  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:23.012198  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:23.012258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:23.053789  285837 cri.go:89] found id: ""
	I1213 10:12:23.053816  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.053825  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:23.053831  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:23.053888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:23.102082  285837 cri.go:89] found id: ""
	I1213 10:12:23.102104  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.102112  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:23.102118  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:23.102173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:23.139793  285837 cri.go:89] found id: ""
	I1213 10:12:23.139820  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.139830  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:23.139836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:23.139892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:23.163400  285837 cri.go:89] found id: ""
	I1213 10:12:23.163426  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.163436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:23.163451  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:23.163464  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:23.227709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:23.227744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:23.241604  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:23.241631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:23.305636  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:23.305670  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:23.305683  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:23.331847  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:23.331879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
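The container-status command is written to survive nodes without crictl: `which crictl || echo crictl` expands to the full path when the binary exists but still yields a runnable token when it does not, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI if the crictl invocation fails for any reason. A standalone sketch of the same fallback chain, using `$(...)` in place of the log's backticks (the two forms are equivalent):

    # prefer crictl; fall back to docker if crictl is missing or errors out
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a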
	I1213 10:12:25.858551  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:25.871752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:25.871822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:25.897476  285837 cri.go:89] found id: ""
	I1213 10:12:25.897527  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.897536  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:25.897543  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:25.897600  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:25.925782  285837 cri.go:89] found id: ""
	I1213 10:12:25.925807  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.925817  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:25.925823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:25.925906  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:25.949723  285837 cri.go:89] found id: ""
	I1213 10:12:25.949750  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.949760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:25.949766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:25.949842  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:25.973991  285837 cri.go:89] found id: ""
	I1213 10:12:25.974016  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.974025  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:25.974032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:25.974107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:26.001033  285837 cri.go:89] found id: ""
	I1213 10:12:26.001056  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.001064  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:26.001070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:26.001144  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:26.077273  285837 cri.go:89] found id: ""
	I1213 10:12:26.077300  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.077309  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:26.077316  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:26.077397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:26.122203  285837 cri.go:89] found id: ""
	I1213 10:12:26.122230  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.122240  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:26.122246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:26.122346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:26.147712  285837 cri.go:89] found id: ""
	I1213 10:12:26.147736  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.147745  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:26.147781  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:26.147799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:26.203487  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:26.203528  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:26.217213  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:26.217246  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:26.284727  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:26.276312   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.277260   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.278746   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.279123   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.280769   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:26.284751  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:26.284763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:26.312716  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:26.312773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:28.841875  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:28.852491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:28.852562  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:28.881629  285837 cri.go:89] found id: ""
	I1213 10:12:28.881653  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.881662  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:28.881669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:28.881728  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:28.906270  285837 cri.go:89] found id: ""
	I1213 10:12:28.906296  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.906306  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:28.906312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:28.906370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:28.931578  285837 cri.go:89] found id: ""
	I1213 10:12:28.931599  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.931607  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:28.931612  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:28.931666  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:28.957311  285837 cri.go:89] found id: ""
	I1213 10:12:28.957334  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.957343  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:28.957349  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:28.957406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:28.981753  285837 cri.go:89] found id: ""
	I1213 10:12:28.981778  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.981787  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:28.981794  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:28.981849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:29.006917  285837 cri.go:89] found id: ""
	I1213 10:12:29.006945  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.006955  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:29.006962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:29.007029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:29.066909  285837 cri.go:89] found id: ""
	I1213 10:12:29.066935  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.066944  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:29.066950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:29.067008  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:29.105599  285837 cri.go:89] found id: ""
	I1213 10:12:29.105625  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.105633  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:29.105642  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:29.105652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:29.130961  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:29.131003  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:29.157785  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:29.157819  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:29.213436  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:29.213472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:29.227454  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:29.227485  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:29.298087  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:29.289850   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.290315   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.291761   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.292182   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.293636   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
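Each retry cycle, including the one that starts below, opens with a process-level probe before any CRI query: with `pgrep`, `-f` matches against the full command line, `-x` requires the pattern to match that whole line, and `-n` returns only the newest match, so a hit means a live kube-apiserver process belonging to this minikube node. A minimal standalone version with the same flags:

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process found"
    else
      echo "no apiserver process"   # the branch taken on every cycle here
    fi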
	I1213 10:12:31.798509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:31.809145  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:31.809221  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:31.833247  285837 cri.go:89] found id: ""
	I1213 10:12:31.833272  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.833281  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:31.833290  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:31.833348  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:31.861756  285837 cri.go:89] found id: ""
	I1213 10:12:31.861779  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.861789  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:31.861795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:31.861851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:31.885473  285837 cri.go:89] found id: ""
	I1213 10:12:31.885496  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.885506  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:31.885512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:31.885566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:31.908602  285837 cri.go:89] found id: ""
	I1213 10:12:31.908626  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.908634  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:31.908640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:31.908695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:31.933964  285837 cri.go:89] found id: ""
	I1213 10:12:31.933990  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.933999  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:31.934005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:31.934063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:31.962393  285837 cri.go:89] found id: ""
	I1213 10:12:31.962416  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.962424  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:31.962431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:31.962490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:31.986650  285837 cri.go:89] found id: ""
	I1213 10:12:31.986676  285837 logs.go:282] 0 containers: []
	W1213 10:12:31.986685  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:31.986692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:31.986749  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:32.017192  285837 cri.go:89] found id: ""
	I1213 10:12:32.017220  285837 logs.go:282] 0 containers: []
	W1213 10:12:32.017229  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:32.017239  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:32.017252  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:32.035285  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:32.035316  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:32.145875  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:32.134854   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.135586   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.137228   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.140039   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:32.141704   12619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
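Note that `describe nodes` is run with the node-bundled kubectl against the node's own kubeconfig, so the failure is independent of the host's kubectl or context. It can be reproduced directly on the node with the exact paths from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

As long as nothing listens on 8443 this exits with status 1 and the same connection-refused errors, confirming a control-plane problem rather than a client-configuration one.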
	I1213 10:12:32.145896  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:32.145909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:32.172371  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:32.172409  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:32.202803  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:32.202833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:34.759246  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:34.770746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:34.770823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:34.798561  285837 cri.go:89] found id: ""
	I1213 10:12:34.798585  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.798594  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:34.798601  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:34.798664  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:34.824521  285837 cri.go:89] found id: ""
	I1213 10:12:34.824544  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.824553  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:34.824559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:34.824616  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:34.848643  285837 cri.go:89] found id: ""
	I1213 10:12:34.848670  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.848680  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:34.848687  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:34.848746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:34.874242  285837 cri.go:89] found id: ""
	I1213 10:12:34.874263  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.874271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:34.874277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:34.874331  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:34.898270  285837 cri.go:89] found id: ""
	I1213 10:12:34.898298  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.898308  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:34.898314  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:34.898374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:34.922469  285837 cri.go:89] found id: ""
	I1213 10:12:34.922492  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.922502  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:34.922508  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:34.922565  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:34.949223  285837 cri.go:89] found id: ""
	I1213 10:12:34.949250  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.949259  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:34.949266  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:34.949320  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:34.977644  285837 cri.go:89] found id: ""
	I1213 10:12:34.977675  285837 logs.go:282] 0 containers: []
	W1213 10:12:34.977685  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:34.977696  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:34.977707  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:35.038624  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:35.038662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:35.079394  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:35.079475  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:35.160019  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:35.151819   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.152538   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154163   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.154458   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:35.155960   12731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:35.160066  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:35.160078  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:35.186026  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:35.186058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
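Each poll cycle above issues the same per-component query through crictl. Run by hand inside the node, the probe for a single component is (a minimal sketch, assuming crictl is on PATH and containerd uses its default socket):

	sudo crictl ps -a --quiet --name=kube-apiserver

An empty result is what logs.go records as "0 containers", so the wait loop repeats the whole cycle a few seconds later.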
	I1213 10:12:37.713450  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:37.724509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:37.724585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:37.748173  285837 cri.go:89] found id: ""
	I1213 10:12:37.748197  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.748206  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:37.748213  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:37.748274  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:37.772262  285837 cri.go:89] found id: ""
	I1213 10:12:37.772285  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.772294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:37.772312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:37.772371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:37.797053  285837 cri.go:89] found id: ""
	I1213 10:12:37.797077  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.797086  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:37.797093  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:37.797151  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:37.821445  285837 cri.go:89] found id: ""
	I1213 10:12:37.821468  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.821477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:37.821484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:37.821538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:37.848175  285837 cri.go:89] found id: ""
	I1213 10:12:37.848199  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.848208  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:37.848214  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:37.848272  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:37.882751  285837 cri.go:89] found id: ""
	I1213 10:12:37.882774  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.882784  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:37.882789  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:37.882847  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:37.907236  285837 cri.go:89] found id: ""
	I1213 10:12:37.907262  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.907271  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:37.907277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:37.907334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:37.931030  285837 cri.go:89] found id: ""
	I1213 10:12:37.931053  285837 logs.go:282] 0 containers: []
	W1213 10:12:37.931061  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:37.931070  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:37.931082  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:37.944201  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:37.944228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:38.014013  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:38.001007   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.001921   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.006866   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.007610   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:38.009335   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:38.014037  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:38.014051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:38.050241  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:38.050336  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:38.123205  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:38.123240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:40.686197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:40.696710  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:40.696797  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:40.726004  285837 cri.go:89] found id: ""
	I1213 10:12:40.726031  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.726040  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:40.726046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:40.726104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:40.751506  285837 cri.go:89] found id: ""
	I1213 10:12:40.751558  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.751567  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:40.751573  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:40.751637  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:40.777206  285837 cri.go:89] found id: ""
	I1213 10:12:40.777232  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.777241  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:40.777247  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:40.777307  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:40.806234  285837 cri.go:89] found id: ""
	I1213 10:12:40.806256  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.806264  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:40.806270  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:40.806326  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:40.835873  285837 cri.go:89] found id: ""
	I1213 10:12:40.835898  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.835907  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:40.835913  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:40.835969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:40.861792  285837 cri.go:89] found id: ""
	I1213 10:12:40.861821  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.861830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:40.861836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:40.861897  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:40.887384  285837 cri.go:89] found id: ""
	I1213 10:12:40.887409  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.887418  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:40.887424  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:40.887482  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:40.918474  285837 cri.go:89] found id: ""
	I1213 10:12:40.918499  285837 logs.go:282] 0 containers: []
	W1213 10:12:40.918508  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:40.918518  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:40.918529  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:40.974634  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:40.974669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:40.988450  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:40.988481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:41.102570  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:41.091923   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.092616   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094183   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.094723   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:41.096608   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:41.102639  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:41.102664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:41.132124  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:41.132159  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.660524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:43.671119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:43.671190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:43.696313  285837 cri.go:89] found id: ""
	I1213 10:12:43.696343  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.696356  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:43.696364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:43.696422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:43.720831  285837 cri.go:89] found id: ""
	I1213 10:12:43.720856  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.720865  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:43.720871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:43.720930  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:43.745280  285837 cri.go:89] found id: ""
	I1213 10:12:43.745305  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.745314  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:43.745321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:43.745382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:43.771809  285837 cri.go:89] found id: ""
	I1213 10:12:43.771832  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.771842  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:43.771848  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:43.771919  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:43.795691  285837 cri.go:89] found id: ""
	I1213 10:12:43.795715  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.795725  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:43.795731  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:43.795789  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:43.821222  285837 cri.go:89] found id: ""
	I1213 10:12:43.821246  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.821254  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:43.821261  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:43.821316  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:43.849405  285837 cri.go:89] found id: ""
	I1213 10:12:43.849428  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.849437  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:43.849450  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:43.849515  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:43.874124  285837 cri.go:89] found id: ""
	I1213 10:12:43.874150  285837 logs.go:282] 0 containers: []
	W1213 10:12:43.874159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:43.874167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:43.874178  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:43.938106  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:43.929845   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.930320   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932022   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.932327   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:43.933807   13068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:43.938129  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:43.938141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:43.963803  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:43.963838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:43.994003  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:43.994030  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:44.069701  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:44.069786  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:46.587357  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:46.597851  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:46.597931  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:46.622018  285837 cri.go:89] found id: ""
	I1213 10:12:46.622044  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.622054  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:46.622060  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:46.622119  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:46.647494  285837 cri.go:89] found id: ""
	I1213 10:12:46.647537  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.647547  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:46.647553  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:46.647612  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:46.673199  285837 cri.go:89] found id: ""
	I1213 10:12:46.673223  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.673237  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:46.673243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:46.673302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:46.702715  285837 cri.go:89] found id: ""
	I1213 10:12:46.702777  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.702799  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:46.702818  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:46.702888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:46.732013  285837 cri.go:89] found id: ""
	I1213 10:12:46.732036  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.732044  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:46.732049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:46.732111  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:46.755882  285837 cri.go:89] found id: ""
	I1213 10:12:46.755907  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.755925  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:46.755933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:46.755993  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:46.780993  285837 cri.go:89] found id: ""
	I1213 10:12:46.781016  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.781025  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:46.781031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:46.781094  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:46.806179  285837 cri.go:89] found id: ""
	I1213 10:12:46.806255  285837 logs.go:282] 0 containers: []
	W1213 10:12:46.806280  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:46.806305  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:46.806342  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:46.863518  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:46.863553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:46.877399  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:46.877428  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:46.946626  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:46.939032   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.939507   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941182   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.941602   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:46.942800   13191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:46.946696  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:46.946739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:46.972274  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:46.972306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:49.510021  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:49.520415  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:49.520489  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:49.544492  285837 cri.go:89] found id: ""
	I1213 10:12:49.544515  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.544524  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:49.544531  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:49.544595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:49.574538  285837 cri.go:89] found id: ""
	I1213 10:12:49.574564  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.574573  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:49.574593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:49.574659  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:49.603237  285837 cri.go:89] found id: ""
	I1213 10:12:49.603267  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.603277  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:49.603283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:49.603339  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:49.627482  285837 cri.go:89] found id: ""
	I1213 10:12:49.627508  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.627547  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:49.627555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:49.627635  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:49.652503  285837 cri.go:89] found id: ""
	I1213 10:12:49.652532  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.652541  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:49.652547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:49.652620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:49.677443  285837 cri.go:89] found id: ""
	I1213 10:12:49.677474  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.677483  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:49.677490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:49.677551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:49.702698  285837 cri.go:89] found id: ""
	I1213 10:12:49.702723  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.702733  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:49.702750  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:49.702813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:49.731706  285837 cri.go:89] found id: ""
	I1213 10:12:49.731727  285837 logs.go:282] 0 containers: []
	W1213 10:12:49.731735  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:49.731750  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:49.731762  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:49.787702  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:49.787741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:49.801570  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:49.801602  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:49.870136  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:49.861042   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.862455   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.863332   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.864338   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:49.865026   13305 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:12:49.870158  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:49.870171  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:49.896174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:49.896211  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:52.425030  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:52.438702  285837 out.go:203] 
	W1213 10:12:52.441528  285837 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:12:52.441562  285837 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:12:52.441572  285837 out.go:285] * Related issues:
	W1213 10:12:52.441583  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:12:52.441596  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:12:52.444462  285837 out.go:203] 
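The K8S_APISERVER_MISSING exit above is the terminal state of that poll loop: for the full 6m0s wait, no process ever matched the apiserver pattern. The equivalent manual probe (a sketch, assuming the profile's node container is still running) would be:

	out/minikube-linux-arm64 ssh -p newest-cni-987495 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

A non-zero exit from pgrep means no apiserver process exists, consistent with every empty crictl listing gathered above.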
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139688152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139757700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139854054Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139930494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139999639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140070369Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140128880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140186801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140255347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140343226Z" level=info msg="Connect containerd service"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140691374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.141400233Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153338815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153402373Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153439969Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153482546Z" level=info msg="Start recovering state"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194202999Z" level=info msg="Start event monitor"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194399260Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194485793Z" level=info msg="Start streaming server"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194562487Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194779253Z" level=info msg="runtime interface starting up..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194850729Z" level=info msg="starting plugins..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194929983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:06:50 newest-cni-987495 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.196601776Z" level=info msg="containerd successfully booted in 0.081602s"
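The only error in the containerd startup log above is the CNI config load failure, which is expected noise before any network config has been written to /etc/cni/net.d. To confirm whether a CNI config was ever installed (a sketch, assuming the kic container from this test is still up):

	docker exec newest-cni-987495 ls -la /etc/cni/net.d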
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:13:01.949663   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:01.950079   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:01.951756   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:01.952291   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:01.953837   13817 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
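Every kubectl attempt in this report fails the same way: nothing is listening on 127.0.0.1:8443. A minimal probe of the endpoint itself (a sketch, assuming curl is available inside the node) is:

	out/minikube-linux-arm64 ssh -p newest-cni-987495 -- curl -sk https://localhost:8443/healthz

"Connection refused" here, rather than a TLS or HTTP error, confirms the apiserver socket was never opened.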
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:13:01 up  1:55,  0 user,  load average: 0.70, 0.64, 1.07
	Linux newest-cni-987495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:12:57 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:57 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:12:58 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:59 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:12:59 newest-cni-987495 kubelet[13665]: E1213 10:12:59.292669   13665 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:12:59 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:12:59 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:00 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 13 10:13:00 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:00 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:00 newest-cni-987495 kubelet[13702]: E1213 10:13:00.291575   13702 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:00 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:00 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:01 newest-cni-987495 kubelet[13723]: E1213 10:13:01.093690   13723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:01 newest-cni-987495 kubelet[13789]: E1213 10:13:01.840259   13789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:01 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (353.367515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-987495" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-987495
helpers_test.go:244: (dbg) docker inspect newest-cni-987495:

-- stdout --
	[
	    {
	        "Id": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	        "Created": "2025-12-13T09:56:44.68064601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:06:44.630226292Z",
	            "FinishedAt": "2025-12-13T10:06:43.28882954Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/hosts",
	        "LogPath": "/var/lib/docker/containers/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac/5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac-json.log",
	        "Name": "/newest-cni-987495",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-987495:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-987495",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d45a23b08cd4461a9690ca5b442ba891297c6574ffcbf36572a5f87a2ae59ac",
	                "LowerDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6717247fcfdbcf85ed6b4e9d2ae0dfad9ce92a7c46f23e71e19ddaf7fe44959a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-987495",
	                "Source": "/var/lib/docker/volumes/newest-cni-987495/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-987495",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-987495",
	                "name.minikube.sigs.k8s.io": "newest-cni-987495",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5075c185fe763e8b4bf25c5fa6e0906d897dd0a6aa9fa09a4f6785fde91f40b",
	            "SandboxKey": "/var/run/docker/netns/d5075c185fe7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-987495": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:1b:64:66:e5:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1cc05b29a6a537694a06e8a33e1431f6867104db51c8eb4299d9f9f07c01c4",
	                    "EndpointID": "e82ad5225efe9fbd3a246c4b71f89967b2a2d9edc684052e26b72ce55599a589",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-987495",
	                        "5d45a23b08cd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
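The full docker inspect dump above can be narrowed to just the fields a check needs by passing a Go template; a minimal sketch using the container name from this run (the second template is the same one minikube itself runs later in this log to locate the SSH endpoint, port 33103 here):

    # container state only
    docker container inspect newest-cni-987495 --format '{{.State.Status}}'
    # host port Docker mapped to the container's SSH port (22/tcp)
    docker container inspect newest-cni-987495 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'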
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (343.955281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
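The --format={{.Host}} template prints only the host state, which is why the output above is just "Running" even though the command exits 2; the other fields of minikube's status output can be selected the same way. A sketch (field names as reported by minikube status):

    out/minikube-linux-arm64 status -p newest-cni-987495 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'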
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-987495 logs -n 25: (1.618328011s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p embed-certs-238987 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ delete  │ -p embed-certs-238987                                                                                                                                                                                                                                      │ embed-certs-238987           │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:53 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:53 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ stop    │ -p default-k8s-diff-port-544967 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:55 UTC │
	│ start   │ -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:55 UTC │ 13 Dec 25 09:56 UTC │
	│ image   │ default-k8s-diff-port-544967 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ pause   │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ unpause │ -p default-k8s-diff-port-544967 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ delete  │ -p default-k8s-diff-port-544967                                                                                                                                                                                                                            │ default-k8s-diff-port-544967 │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │ 13 Dec 25 09:56 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 09:56 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-328069 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:00 UTC │                     │
	│ stop    │ -p no-preload-328069 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ addons  │ enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │ 13 Dec 25 10:02 UTC │
	│ start   │ -p no-preload-328069 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-328069            │ jenkins │ v1.37.0 │ 13 Dec 25 10:02 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-987495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:05 UTC │                     │
	│ stop    │ -p newest-cni-987495 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ addons  │ enable dashboard -p newest-cni-987495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │ 13 Dec 25 10:06 UTC │
	│ start   │ -p newest-cni-987495 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:06 UTC │                     │
	│ image   │ newest-cni-987495 image list --format=json                                                                                                                                                                                                                 │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ pause   │ -p newest-cni-987495 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	│ unpause │ -p newest-cni-987495 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-987495            │ jenkins │ v1.37.0 │ 13 Dec 25 10:12 UTC │ 13 Dec 25 10:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:06:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:06:44.358606  285837 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:06:44.358774  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.358804  285837 out.go:374] Setting ErrFile to fd 2...
	I1213 10:06:44.358810  285837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:06:44.359110  285837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:06:44.359584  285837 out.go:368] Setting JSON to false
	I1213 10:06:44.360505  285837 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6557,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:06:44.360574  285837 start.go:143] virtualization:  
	I1213 10:06:44.365480  285837 out.go:179] * [newest-cni-987495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:06:44.368718  285837 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:06:44.368777  285837 notify.go:221] Checking for updates...
	I1213 10:06:44.374649  285837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:06:44.377632  285837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:44.380625  285837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:06:44.383607  285837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:06:44.386498  285837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:06:44.389949  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:44.390563  285837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:06:44.426169  285837 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:06:44.426412  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.479541  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.469338758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.479654  285837 docker.go:319] overlay module found
	I1213 10:06:44.482815  285837 out.go:179] * Using the docker driver based on existing profile
	I1213 10:06:44.485692  285837 start.go:309] selected driver: docker
	I1213 10:06:44.485711  285837 start.go:927] validating driver "docker" against &{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.485823  285837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:06:44.486552  285837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:06:44.545256  285837 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:06:44.535101087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:06:44.545615  285837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 10:06:44.545650  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:44.545706  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:44.545747  285837 start.go:353] cluster config:
	{Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:44.548958  285837 out.go:179] * Starting "newest-cni-987495" primary control-plane node in "newest-cni-987495" cluster
	I1213 10:06:44.551733  285837 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:06:44.554789  285837 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:06:44.557547  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:44.557592  285837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1213 10:06:44.557602  285837 cache.go:65] Caching tarball of preloaded images
	I1213 10:06:44.557636  285837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:06:44.557693  285837 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:06:44.557703  285837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1213 10:06:44.557824  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.577619  285837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:06:44.577644  285837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:06:44.577660  285837 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:06:44.577696  285837 start.go:360] acquireMachinesLock for newest-cni-987495: {Name:mk0b05e51288e33ec02181c33b2cba54230603c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:06:44.577756  285837 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "newest-cni-987495"
	I1213 10:06:44.577778  285837 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:06:44.577787  285837 fix.go:54] fixHost starting: 
	I1213 10:06:44.578057  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.595484  285837 fix.go:112] recreateIfNeeded on newest-cni-987495: state=Stopped err=<nil>
	W1213 10:06:44.595545  285837 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 10:06:43.023116  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:45.025351  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:44.598729  285837 out.go:252] * Restarting existing docker container for "newest-cni-987495" ...
	I1213 10:06:44.598811  285837 cli_runner.go:164] Run: docker start newest-cni-987495
	I1213 10:06:44.855461  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:44.880412  285837 kic.go:430] container "newest-cni-987495" state is running.
	I1213 10:06:44.880797  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:44.909497  285837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/config.json ...
	I1213 10:06:44.909726  285837 machine.go:94] provisionDockerMachine start ...
	I1213 10:06:44.909783  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:44.930622  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:44.931232  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:44.931291  285837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:06:44.932041  285837 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:06:48.091507  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.091560  285837 ubuntu.go:182] provisioning hostname "newest-cni-987495"
	I1213 10:06:48.091625  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.110757  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.111074  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.111090  285837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-987495 && echo "newest-cni-987495" | sudo tee /etc/hostname
	I1213 10:06:48.273955  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-987495
	
	I1213 10:06:48.274083  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.291615  285837 main.go:143] libmachine: Using SSH client type: native
	I1213 10:06:48.291933  285837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1213 10:06:48.291961  285837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-987495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-987495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-987495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:06:48.443806  285837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:06:48.443836  285837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
	I1213 10:06:48.443909  285837 ubuntu.go:190] setting up certificates
	I1213 10:06:48.443925  285837 provision.go:84] configureAuth start
	I1213 10:06:48.444014  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:48.461447  285837 provision.go:143] copyHostCerts
	I1213 10:06:48.461529  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
	I1213 10:06:48.461544  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
	I1213 10:06:48.461626  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
	I1213 10:06:48.461731  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
	I1213 10:06:48.461744  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
	I1213 10:06:48.461773  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
	I1213 10:06:48.461831  285837 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
	I1213 10:06:48.461840  285837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
	I1213 10:06:48.461873  285837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
	I1213 10:06:48.461929  285837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.newest-cni-987495 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-987495]
	I1213 10:06:48.588588  285837 provision.go:177] copyRemoteCerts
	I1213 10:06:48.588677  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:06:48.588742  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.606370  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.711093  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:06:48.728291  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:06:48.746238  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:06:48.763841  285837 provision.go:87] duration metric: took 319.890818ms to configureAuth
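	The server cert staged above can be spot-checked against the SANs generated earlier (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-987495); a sketch to run inside the node, assuming an OpenSSL recent enough to support -ext:
	  # show subject and SANs of the provisioned server cert
	  sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName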
	I1213 10:06:48.763919  285837 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:06:48.764158  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:48.764172  285837 machine.go:97] duration metric: took 3.854438499s to provisionDockerMachine
	I1213 10:06:48.764181  285837 start.go:293] postStartSetup for "newest-cni-987495" (driver="docker")
	I1213 10:06:48.764199  285837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:06:48.764250  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:06:48.764297  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.781656  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:48.887571  285837 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:06:48.891032  285837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:06:48.891062  285837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:06:48.891074  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
	I1213 10:06:48.891128  285837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
	I1213 10:06:48.891231  285837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
	I1213 10:06:48.891336  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 10:06:48.898692  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:48.916401  285837 start.go:296] duration metric: took 152.205033ms for postStartSetup
	I1213 10:06:48.916505  285837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:06:48.916556  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:48.933960  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.036570  285837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:06:49.041484  285837 fix.go:56] duration metric: took 4.463690867s for fixHost
	I1213 10:06:49.041511  285837 start.go:83] releasing machines lock for "newest-cni-987495", held for 4.463742733s
	I1213 10:06:49.041581  285837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-987495
	I1213 10:06:49.058404  285837 ssh_runner.go:195] Run: cat /version.json
	I1213 10:06:49.058462  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.058542  285837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:06:49.058607  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:49.080342  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.081196  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:49.272327  285837 ssh_runner.go:195] Run: systemctl --version
	I1213 10:06:49.280206  285837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:06:49.285584  285837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:06:49.285649  285837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:06:49.294944  285837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:06:49.295018  285837 start.go:496] detecting cgroup driver to use...
	I1213 10:06:49.295073  285837 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:06:49.295155  285837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1213 10:06:49.313555  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1213 10:06:49.330142  285837 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:06:49.330250  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:06:49.347394  285837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:06:49.361017  285837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:06:49.470304  285837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:06:49.578011  285837 docker.go:234] disabling docker service ...
	I1213 10:06:49.578102  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:06:49.592856  285837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:06:49.605575  285837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:06:49.713643  285837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:06:49.824293  285837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:06:49.838298  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:06:49.852989  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1213 10:06:49.861909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1213 10:06:49.870661  285837 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1213 10:06:49.870784  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1213 10:06:49.879670  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.888429  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1213 10:06:49.896909  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1213 10:06:49.905618  285837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:06:49.913163  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1213 10:06:49.921632  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1213 10:06:49.930294  285837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1213 10:06:49.939291  285837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:06:49.947067  285837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:06:49.954313  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.072981  285837 ssh_runner.go:195] Run: sudo systemctl restart containerd
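	After the sed edits and the restart above, the effective containerd settings can be confirmed directly; a sketch:
	  # verify the keys the provisioner just rewrote, and that containerd came back up
	  sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	  systemctl is-active containerd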
	I1213 10:06:50.196904  285837 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1213 10:06:50.196994  285837 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1213 10:06:50.200903  285837 start.go:564] Will wait 60s for crictl version
	I1213 10:06:50.201048  285837 ssh_runner.go:195] Run: which crictl
	I1213 10:06:50.204672  285837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:06:50.230484  285837 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
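	crictl resolves its endpoint from the /etc/crictl.yaml written above; the equivalent one-off invocation names the socket explicitly:
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	  sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images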
	I1213 10:06:50.230603  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.250716  285837 ssh_runner.go:195] Run: containerd --version
	I1213 10:06:50.275578  285837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1213 10:06:50.278424  285837 cli_runner.go:164] Run: docker network inspect newest-cni-987495 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:06:50.294657  285837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 10:06:50.298351  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
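	The /etc/hosts rewrite above is minikube's idempotent pattern for pinned entries: strip any stale line for the name, append the fresh mapping, then copy the temp file back under sudo. Written out readably (same effect as the one-liner above):
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$ \
	    && sudo cp /tmp/h.$$ /etc/hosts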
	I1213 10:06:50.310828  285837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 10:06:50.313572  285837 kubeadm.go:884] updating cluster {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:06:50.313727  285837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1213 10:06:50.313810  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.342567  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.342593  285837 containerd.go:534] Images already preloaded, skipping extraction
	I1213 10:06:50.342654  285837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:06:50.371166  285837 containerd.go:627] all images are preloaded for containerd runtime.
	I1213 10:06:50.371189  285837 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:06:50.371197  285837 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1213 10:06:50.371299  285837 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-987495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:06:50.371378  285837 ssh_runner.go:195] Run: sudo crictl info
	I1213 10:06:50.396100  285837 cni.go:84] Creating CNI manager for ""
	I1213 10:06:50.396123  285837 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 10:06:50.396165  285837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 10:06:50.396196  285837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-987495 NodeName:newest-cni-987495 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:06:50.396373  285837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-987495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
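The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the `kubeadm options` struct logged just before it. A minimal sketch of that rendering step, assuming a text/template approach; the template and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the kubeadm config template; only a few of the
// fields visible in the log above are shown.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the kubeadm options line in this log.
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
		NodeName         string
	}{"192.168.85.2", 8443, "unix:///run/containerd/containerd.sock", "newest-cni-987495"})
}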
	I1213 10:06:50.396459  285837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:06:50.404329  285837 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:06:50.404398  285837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:06:50.411842  285837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1213 10:06:50.424649  285837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:06:50.442140  285837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
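The three `scp memory --> …` lines above stream payloads that were rendered in memory (the kubelet drop-in, the unit file, and the kubeadm config, with the byte counts logged) to the node over SSH. A minimal sketch of the pattern, assuming a plain `ssh`/`sudo tee` pipe rather than minikube's actual ssh_runner; the host string is illustrative:

package main

import (
	"bytes"
	"os/exec"
)

// copyMemory streams data to path on the node: nothing is written to local
// disk, the bytes go straight from memory through ssh into sudo tee.
func copyMemory(host, path string, data []byte) error {
	cmd := exec.Command("ssh", host, "sudo", "tee", path)
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	_ = copyMemory("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new",
		[]byte("apiVersion: kubeadm.k8s.io/v1beta4\n"))
}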
	I1213 10:06:50.455154  285837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:06:50.459006  285837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
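The bash one-liner above rewrites /etc/hosts by filtering out any stale `control-plane.minikube.internal` entry, appending the fresh mapping, and copying a temp file over the original so readers never see a half-written hosts file. A rough Go equivalent of the same idea (it swaps the file with os.Rename instead of `sudo cp`; the helper name is hypothetical):

package main

import (
	"os"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends "ip\tname",
// replacing the file through a temp copy, like the shell idiom in the log.
func pinHost(hostsPath, ip, name string) error {
	raw, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(raw), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal")
}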
	I1213 10:06:50.468675  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:50.580293  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:50.596864  285837 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495 for IP: 192.168.85.2
	I1213 10:06:50.596887  285837 certs.go:195] generating shared ca certs ...
	I1213 10:06:50.596905  285837 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:50.597091  285837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
	I1213 10:06:50.597205  285837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
	I1213 10:06:50.597223  285837 certs.go:257] generating profile certs ...
	I1213 10:06:50.597356  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/client.key
	I1213 10:06:50.597436  285837 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key.bb69770e
	I1213 10:06:50.597506  285837 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key
	I1213 10:06:50.597658  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
	W1213 10:06:50.597722  285837 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
	I1213 10:06:50.597739  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:06:50.597785  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:06:50.597830  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:06:50.597864  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
	I1213 10:06:50.597929  285837 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
	I1213 10:06:50.598639  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:06:50.618438  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:06:50.636641  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:06:50.654754  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:06:50.674470  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:06:50.692387  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 10:06:50.709515  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:06:50.726691  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/newest-cni-987495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 10:06:50.744316  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
	I1213 10:06:50.762153  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:06:50.779459  285837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
	I1213 10:06:50.799850  285837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:06:50.814739  285837 ssh_runner.go:195] Run: openssl version
	I1213 10:06:50.821667  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.831484  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
	I1213 10:06:50.840240  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844034  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.844100  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
	I1213 10:06:50.885521  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:06:50.892992  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.900259  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:06:50.907747  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911335  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.911425  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:06:50.952315  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:06:50.959952  285837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.967099  285837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
	I1213 10:06:50.974300  285837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977776  285837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
	I1213 10:06:50.977836  285837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
	I1213 10:06:51.019185  285837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
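The `openssl x509 -hash -noout` calls above compute the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, which is why each PEM gets a symlink with a name like `b5213941.0` or `51391683.0`. A small sketch reproducing the link-name computation (the helper name is hypothetical; the openssl invocation is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkName returns the "<subject-hash>.0" file name OpenSSL expects for pem.
func linkName(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := linkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/" + name)
}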
	I1213 10:06:51.026990  285837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:06:51.031010  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:06:51.084662  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:06:51.132673  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:06:51.177864  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:06:51.221006  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:06:51.268266  285837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
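`-checkend 86400` asks whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. The same assertion restated in Go as a sketch (path and helper name are illustrative; run where the certs live):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validIn24h reports whether the PEM certificate at path is currently valid
// and will still be valid 24 hours from now, mirroring -checkend 86400.
func validIn24h(path string) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	deadline := time.Now().Add(24 * time.Hour)
	return time.Now().After(cert.NotBefore) && deadline.Before(cert.NotAfter), nil
}

func main() {
	ok, err := validIn24h("/var/lib/minikube/certs/front-proxy-client.crt")
	fmt.Println(ok, err)
}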
	I1213 10:06:51.309760  285837 kubeadm.go:401] StartCluster: {Name:newest-cni-987495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-987495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:06:51.309854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1213 10:06:51.309920  285837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:06:51.336480  285837 cri.go:89] found id: ""
	I1213 10:06:51.336643  285837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:06:51.344873  285837 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:06:51.344892  285837 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:06:51.344971  285837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:06:51.352443  285837 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:06:51.353090  285837 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-987495" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.353376  285837 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-2315/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-987495" cluster setting kubeconfig missing "newest-cni-987495" context setting]
	I1213 10:06:51.353816  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.355217  285837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:06:51.362937  285837 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 10:06:51.363006  285837 kubeadm.go:602] duration metric: took 18.107502ms to restartPrimaryControlPlane
	I1213 10:06:51.363022  285837 kubeadm.go:403] duration metric: took 53.271819ms to StartCluster
	I1213 10:06:51.363041  285837 settings.go:142] acquiring lock: {Name:mk4c336d37880cd61e69d15cfc9fed77a8c1e75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.363105  285837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:06:51.363987  285837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/kubeconfig: {Name:mk7d9aea9ff59bb5f47932965036766b0189e375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:06:51.364220  285837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:06:51.364499  285837 config.go:182] Loaded profile config "newest-cni-987495": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:06:51.364635  285837 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:06:51.364717  285837 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-987495"
	I1213 10:06:51.364742  285837 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-987495"
	I1213 10:06:51.364767  285837 addons.go:70] Setting default-storageclass=true in profile "newest-cni-987495"
	I1213 10:06:51.364819  285837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-987495"
	I1213 10:06:51.364774  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.365187  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.365396  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.364741  285837 addons.go:70] Setting dashboard=true in profile "newest-cni-987495"
	I1213 10:06:51.365978  285837 addons.go:239] Setting addon dashboard=true in "newest-cni-987495"
	W1213 10:06:51.365987  285837 addons.go:248] addon dashboard should already be in state true
	I1213 10:06:51.366008  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.366429  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.370287  285837 out.go:179] * Verifying Kubernetes components...
	I1213 10:06:51.373474  285837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:06:51.400526  285837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 10:06:51.404501  285837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 10:06:51.407418  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 10:06:51.407443  285837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 10:06:51.407622  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.417800  285837 addons.go:239] Setting addon default-storageclass=true in "newest-cni-987495"
	I1213 10:06:51.417844  285837 host.go:66] Checking if "newest-cni-987495" exists ...
	I1213 10:06:51.418251  285837 cli_runner.go:164] Run: docker container inspect newest-cni-987495 --format={{.State.Status}}
	I1213 10:06:51.419100  285837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 10:06:47.522700  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:49.522769  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:51.523631  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:51.423855  285837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.423880  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:06:51.423942  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.466299  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.483641  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
	I1213 10:06:51.486041  285837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.486059  285837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:06:51.486115  285837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-987495
	I1213 10:06:51.509387  285837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/newest-cni-987495/id_rsa Username:docker}
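The `docker container inspect -f` template in the lines above extracts the host port Docker published for the container's 22/tcp, which is how the `new ssh client` lines arrive at 127.0.0.1:33103. The same lookup from Go, as a sketch (the helper name is hypothetical; the format string is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshHostPort("newest-cni-987495")
	fmt.Println(port, err) // the log shows 33103 for this run
}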
	I1213 10:06:51.646942  285837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:06:51.680839  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 10:06:51.680862  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 10:06:51.697914  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 10:06:51.697938  285837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 10:06:51.704518  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:51.713551  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:51.723021  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 10:06:51.723048  285837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 10:06:51.778125  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 10:06:51.778149  285837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 10:06:51.806697  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 10:06:51.806719  285837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 10:06:51.819170  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 10:06:51.819253  285837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 10:06:51.832331  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 10:06:51.832355  285837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 10:06:51.845336  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 10:06:51.845362  285837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 10:06:51.859132  285837 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:51.859155  285837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 10:06:51.872954  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:06:52.275964  285837 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:06:52.276037  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:52.276137  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276165  285837 retry.go:31] will retry after 226.70351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276226  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276237  285837 retry.go:31] will retry after 265.695109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.276427  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.276440  285837 retry.go:31] will retry after 287.765057ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
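	Each `retry.go:31] will retry after …` line above is one round of a backoff loop around `kubectl apply`: the apply fails while the apiserver is still refusing connections on [::1]:8443, and the loop sleeps a randomized delay (226ms, 265ms, 287ms, …) before trying again. A minimal sketch of the pattern, assuming a simple exponential-plus-jitter policy rather than minikube's actual retry helper:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn until it succeeds or maxAttempts is exhausted, sleeping
// a randomized, roughly growing delay between failures, echoing the
// "will retry after 226.70351ms" lines in the log above.
func retryAfter(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<attempt + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}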
	I1213 10:06:52.503091  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:52.542820  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:52.565377  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:52.583674  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.583713  285837 retry.go:31] will retry after 384.757306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.624746  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.624777  285837 retry.go:31] will retry after 404.862658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:52.656044  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.656099  285837 retry.go:31] will retry after 520.967054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:52.776249  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:52.969189  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.030822  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.051878  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.051909  285837 retry.go:31] will retry after 644.635232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:06:53.146104  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.146138  285837 retry.go:31] will retry after 713.617137ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.177278  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.244074  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.244105  285837 retry.go:31] will retry after 478.208285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.276451  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:53.697474  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:53.722935  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:53.763188  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.763282  285837 retry.go:31] will retry after 791.669242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.776509  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:53.833584  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.833619  285837 retry.go:31] will retry after 1.106769375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.860665  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:53.922352  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:53.922382  285837 retry.go:31] will retry after 439.211444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.277094  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:54.023458  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:06:56.023636  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:06:54.362407  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:54.425741  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.425772  285837 retry.go:31] will retry after 994.413015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.555979  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:54.643378  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.643410  285837 retry.go:31] will retry after 1.597794919s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:54.776687  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:54.941378  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:55.010057  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.010106  285837 retry.go:31] will retry after 1.576792043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.276187  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:55.420648  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:06:55.480113  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.480142  285837 retry.go:31] will retry after 2.26666641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:55.776309  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:56.242125  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:06:56.276562  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:56.308877  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.308912  285837 retry.go:31] will retry after 2.70852063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.587192  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:56.650840  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.650869  285837 retry.go:31] will retry after 1.746680045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:56.776898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.276239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:57.747110  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:06:57.776721  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:57.808824  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:57.808896  285837 retry.go:31] will retry after 3.338979851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:58.397695  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:06:58.460604  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.460637  285837 retry.go:31] will retry after 1.622921048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:58.776104  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:06:59.018609  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:06:59.122924  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.122951  285837 retry.go:31] will retry after 3.647698418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:06:59.276167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:06:58.523051  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:01.022919  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
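// The node_ready.go warnings above poll the node's Ready condition through
// the apiserver and treat connection-refused as retryable. A sketch of the
// same check with client-go; the kubeconfig path and node name are taken
// from the log, while the loop itself is an assumption about the shape of
// the check, not minikube's code:
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-328069", metav1.GetOptions{})
		if err != nil {
			// Connection refused while the apiserver restarts: warn and retry,
			// mirroring the "(will retry)" lines in the log.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}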
	I1213 10:06:59.776456  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:00.084206  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:00.276658  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:00.330895  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.330933  285837 retry.go:31] will retry after 4.848981129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:00.776778  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.148539  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:01.211860  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.211894  285837 retry.go:31] will retry after 4.161832977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:01.277039  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:01.776560  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.276839  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:02.771686  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:07:02.776972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:02.901393  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:02.901424  285837 retry.go:31] will retry after 5.549971544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
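The retry.go:31 entries show the addon applier backing off with randomized, roughly increasing delays (4.2s, then 5.5s, growing toward 20s later in this run) rather than a fixed interval. A bash sketch of the same shape, with a hypothetical run_apply placeholder standing in for the kubectl invocation above:

    # Hedged sketch of retry with jittered, growing backoff, mirroring
    # the shape of the retry.go delays in this log; run_apply is a
    # stand-in for the real apply command.
    retry_apply() {
      local delay=4
      for attempt in 1 2 3 4 5; do
        run_apply && return 0
        echo "attempt ${attempt} failed; will retry after ${delay}s" >&2
        sleep "${delay}"
        delay=$(( delay * 3 / 2 + RANDOM % 3 ))  # grow ~1.5x, add jitter
      done
      return 1
    }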
	I1213 10:07:03.276936  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:03.776830  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:04.276724  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
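Interleaved with the retries, the start path polls for a running apiserver roughly every 500ms. pgrep -xnf matches the pattern against the full command line (-f), requires a whole-line match (-x), and reports only the newest match (-n); exit status 0 is the "up" signal. The same check in isolation:

    # Exit 0 iff a kube-apiserver whose full command line matches the
    # pattern is running; -f matches the cmdline, -x anchors it, -n newest.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process found"
    else
      echo "apiserver process not running yet"
    fi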
	W1213 10:07:03.522677  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:05.522772  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
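The node_ready.go warnings belong to the parallel no-preload test (PID 279351), which is polling the Ready condition of node no-preload-328069 against 192.168.76.2:8443 and hitting the same connection refusal. Assuming the usual minikube kubeconfig context named after the profile, a one-shot version of that check could look like:

    # Print the node's Ready condition status ("True"/"False"/"Unknown").
    # The context name is an assumption: minikube normally names the
    # kubeconfig context after the profile.
    kubectl --context no-preload-328069 get node no-preload-328069 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'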
	I1213 10:07:04.777224  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.180067  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:07:05.247404  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.247439  285837 retry.go:31] will retry after 4.476695877s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.276547  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:05.374229  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:05.433759  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.433787  285837 retry.go:31] will retry after 4.37892264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:05.776166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.276368  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:06.776601  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.276152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:07.777077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.277179  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:08.451866  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:08.512981  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.513027  285837 retry.go:31] will retry after 9.372893328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:08.776155  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:09.276770  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:08.022655  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:10.022768  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:09.724392  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:09.776822  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:09.785453  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.785488  285837 retry.go:31] will retry after 5.955337388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.813514  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:09.876563  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:09.876594  285837 retry.go:31] will retry after 6.585328869s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:10.276122  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:10.776152  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.276997  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:11.776748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.276867  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:12.777071  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.276725  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:13.776915  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:14.276832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:12.022989  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:14.522670  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:14.777034  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.277144  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:15.741108  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:15.776723  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:15.809076  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:15.809111  285837 retry.go:31] will retry after 8.411412429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.276706  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:16.462334  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:16.524133  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.524164  285837 retry.go:31] will retry after 16.275248342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:16.776613  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.276278  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.776240  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:17.886523  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:17.954531  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:17.954562  285837 retry.go:31] will retry after 10.907278655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:18.276175  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:18.776243  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:19.276722  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:17.022862  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:19.522763  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:21.522806  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:19.776239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.276570  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:20.776244  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.277087  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:21.776477  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:22.777167  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.276540  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:23.776720  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:24.220799  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:24.276447  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:24.283800  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:24.283834  285837 retry.go:31] will retry after 19.949258949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:24.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:26.023564  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:24.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.276211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:25.776711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.276227  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:26.776716  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.276229  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:27.776183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.276941  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.776226  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:28.862833  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:28.922616  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:28.922648  285837 retry.go:31] will retry after 8.454738907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:29.277083  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:28.522731  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:30.522938  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:29.776182  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.277060  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:30.776835  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.276746  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:31.776414  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.276209  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.776715  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:32.799816  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:32.901801  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:32.901845  285837 retry.go:31] will retry after 14.65260505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:33.276216  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:33.776222  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:34.276756  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:33.022800  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:35.522770  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:34.776764  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.277073  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:35.776211  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.276331  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:36.776510  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.276183  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:37.378406  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:37.440661  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.440691  285837 retry.go:31] will retry after 16.048870296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:37.776113  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.276917  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:38.776758  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:39.276296  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:38.022809  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:40.522836  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:39.776735  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.276749  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:40.777116  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.277172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:41.776857  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.277141  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:42.776207  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.276171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:43.776690  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:44.233363  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 10:07:44.276911  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:44.294603  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:44.294641  285837 retry.go:31] will retry after 45.098120748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:42.523034  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:45.022823  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:44.776742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.276466  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:45.776133  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.280870  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:46.776232  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.276987  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:47.554729  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:07:47.616803  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.616837  285837 retry.go:31] will retry after 38.754607023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:47.776168  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.276203  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:48.776412  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:49.276189  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 10:07:47.022949  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:49.522878  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:49.776177  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.277157  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:50.776201  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.276146  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:51.776144  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:51.776242  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:51.804204  285837 cri.go:89] found id: ""
	I1213 10:07:51.804236  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.804246  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:51.804253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:51.804314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:51.829636  285837 cri.go:89] found id: ""
	I1213 10:07:51.829669  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.829679  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:51.829685  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:51.829745  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:51.857487  285837 cri.go:89] found id: ""
	I1213 10:07:51.857510  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.857519  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:51.857525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:51.857590  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:51.881972  285837 cri.go:89] found id: ""
	I1213 10:07:51.881998  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.882006  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:51.882012  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:51.882072  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:51.906050  285837 cri.go:89] found id: ""
	I1213 10:07:51.906074  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.906083  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:51.906089  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:51.906149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:51.930678  285837 cri.go:89] found id: ""
	I1213 10:07:51.930700  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.930708  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:51.930715  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:51.930774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:51.955590  285837 cri.go:89] found id: ""
	I1213 10:07:51.955661  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.955683  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:51.955701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:51.955786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:51.979349  285837 cri.go:89] found id: ""
	I1213 10:07:51.979374  285837 logs.go:282] 0 containers: []
	W1213 10:07:51.979382  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:51.979391  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:51.979405  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:52.048255  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:52.039592    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.040290    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.041824    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.042312    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:52.043873    1868 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:52.048276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:52.048290  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:52.074149  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:52.074187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:52.103113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:52.103142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:52.161764  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:52.161797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:53.489865  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 10:07:53.547700  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:07:53.547730  285837 retry.go:31] will retry after 48.398435893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:07:52.022714  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:54.023780  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:07:56.522671  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:07:54.676402  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:54.686866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:54.686943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:54.716493  285837 cri.go:89] found id: ""
	I1213 10:07:54.716514  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.716523  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:54.716529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:54.716584  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:54.740751  285837 cri.go:89] found id: ""
	I1213 10:07:54.740778  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.740787  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:54.740797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:54.740854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:54.763680  285837 cri.go:89] found id: ""
	I1213 10:07:54.763703  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.763712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:54.763717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:54.763773  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:54.787504  285837 cri.go:89] found id: ""
	I1213 10:07:54.787556  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.787564  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:54.787570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:54.787626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:54.812200  285837 cri.go:89] found id: ""
	I1213 10:07:54.812222  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.812231  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:54.812253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:54.812314  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:54.841586  285837 cri.go:89] found id: ""
	I1213 10:07:54.841613  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.841623  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:54.841629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:54.841687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:54.865631  285837 cri.go:89] found id: ""
	I1213 10:07:54.865658  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.865667  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:54.865673  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:54.865731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:54.889746  285837 cri.go:89] found id: ""
	I1213 10:07:54.889773  285837 logs.go:282] 0 containers: []
	W1213 10:07:54.889782  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:54.889792  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:54.889803  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:54.945120  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:54.945155  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:54.958121  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:54.958145  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:55.027564  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:55.017674    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.018407    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.020402    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.021114    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:55.022964    1995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:55.027592  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:55.027605  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:55.053752  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:55.053788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:07:57.584821  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:07:57.597676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:07:57.597774  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:07:57.621661  285837 cri.go:89] found id: ""
	I1213 10:07:57.621684  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.621692  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:07:57.621699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:07:57.621756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:07:57.649006  285837 cri.go:89] found id: ""
	I1213 10:07:57.649028  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.649036  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:07:57.649042  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:07:57.649107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:07:57.672839  285837 cri.go:89] found id: ""
	I1213 10:07:57.672866  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.672875  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:07:57.672881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:07:57.672937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:07:57.697343  285837 cri.go:89] found id: ""
	I1213 10:07:57.697366  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.697375  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:07:57.697381  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:07:57.697447  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:07:57.722254  285837 cri.go:89] found id: ""
	I1213 10:07:57.722276  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.722284  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:07:57.722291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:07:57.722346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:07:57.746125  285837 cri.go:89] found id: ""
	I1213 10:07:57.746150  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.746159  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:07:57.746165  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:07:57.746220  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:07:57.770612  285837 cri.go:89] found id: ""
	I1213 10:07:57.770679  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.770702  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:07:57.770720  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:07:57.770799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:07:57.795253  285837 cri.go:89] found id: ""
	I1213 10:07:57.795277  285837 logs.go:282] 0 containers: []
	W1213 10:07:57.795285  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:07:57.795294  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:07:57.795320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:07:57.852923  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:07:57.852957  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:07:57.866320  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:07:57.866350  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:07:57.930573  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:07:57.921741    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.922259    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.923923    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.925244    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:07:57.926323    2109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:07:57.930596  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:07:57.930609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:07:57.955644  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:07:57.955687  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:07:58.522782  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:00.523382  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:00.485873  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:00.498933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:00.499039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:00.588348  285837 cri.go:89] found id: ""
	I1213 10:08:00.588373  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.588383  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:00.588403  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:00.588480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:00.632508  285837 cri.go:89] found id: ""
	I1213 10:08:00.632581  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.632604  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:00.632623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:00.632721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:00.659204  285837 cri.go:89] found id: ""
	I1213 10:08:00.659231  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.659240  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:00.659246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:00.659303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:00.685440  285837 cri.go:89] found id: ""
	I1213 10:08:00.685468  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.685477  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:00.685492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:00.685551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:00.710692  285837 cri.go:89] found id: ""
	I1213 10:08:00.710719  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.710728  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:00.710734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:00.710791  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:00.736661  285837 cri.go:89] found id: ""
	I1213 10:08:00.736683  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.736692  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:00.736698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:00.736766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:00.761591  285837 cri.go:89] found id: ""
	I1213 10:08:00.761617  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.761627  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:00.761634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:00.761695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:00.786438  285837 cri.go:89] found id: ""
	I1213 10:08:00.786465  285837 logs.go:282] 0 containers: []
	W1213 10:08:00.786474  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:00.786484  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:00.786494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:00.842291  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:00.842327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:00.855993  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:00.856020  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:00.925840  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:00.916441    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918096    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.918752    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920335    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:00.920931    2219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:00.925874  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:00.925888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:00.953015  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:00.953064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.486172  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:03.496591  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:03.496662  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:03.534940  285837 cri.go:89] found id: ""
	I1213 10:08:03.534964  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.534973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:03.534979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:03.535038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:03.598662  285837 cri.go:89] found id: ""
	I1213 10:08:03.598688  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.598698  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:03.598704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:03.598766  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:03.624092  285837 cri.go:89] found id: ""
	I1213 10:08:03.624114  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.624122  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:03.624129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:03.624188  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:03.649153  285837 cri.go:89] found id: ""
	I1213 10:08:03.649176  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.649185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:03.649196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:03.649255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:03.673710  285837 cri.go:89] found id: ""
	I1213 10:08:03.673778  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.673802  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:03.673822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:03.673901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:03.698952  285837 cri.go:89] found id: ""
	I1213 10:08:03.698978  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.699004  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:03.699011  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:03.699076  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:03.723499  285837 cri.go:89] found id: ""
	I1213 10:08:03.723548  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.723558  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:03.723563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:03.723626  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:03.748795  285837 cri.go:89] found id: ""
	I1213 10:08:03.748819  285837 logs.go:282] 0 containers: []
	W1213 10:08:03.748828  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:03.748837  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:03.748848  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:03.812342  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:03.803601    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.804143    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.805763    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.806297    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:03.807979    2325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:03.812368  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:03.812388  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:03.841166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:03.841206  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:03.871116  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:03.871146  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:03.927807  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:03.927839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 10:08:03.022774  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:05.522704  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:06.441780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:06.452228  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:06.452309  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:06.476347  285837 cri.go:89] found id: ""
	I1213 10:08:06.476370  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.476378  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:06.476384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:06.476441  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:06.504937  285837 cri.go:89] found id: ""
	I1213 10:08:06.504961  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.504970  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:06.504977  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:06.505037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:06.553519  285837 cri.go:89] found id: ""
	I1213 10:08:06.553545  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.553553  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:06.553559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:06.553619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:06.608223  285837 cri.go:89] found id: ""
	I1213 10:08:06.608249  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.608258  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:06.608264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:06.608322  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:06.639732  285837 cri.go:89] found id: ""
	I1213 10:08:06.639801  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.639816  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:06.639823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:06.639886  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:06.668074  285837 cri.go:89] found id: ""
	I1213 10:08:06.668099  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.668108  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:06.668114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:06.668190  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:06.691695  285837 cri.go:89] found id: ""
	I1213 10:08:06.691720  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.691729  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:06.691735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:06.691801  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:06.717093  285837 cri.go:89] found id: ""
	I1213 10:08:06.717120  285837 logs.go:282] 0 containers: []
	W1213 10:08:06.717129  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:06.717140  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:06.717152  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:06.773552  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:06.773584  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:06.787064  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:06.787090  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:06.854164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:06.846017    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.846675    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848150    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.848641    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:06.850183    2445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
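Every "describe nodes" attempt in this stretch fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, but no kube-apiserver container has been created yet, so each request is refused at the TCP level before any API call is made (minikube prints the captured stderr twice, once inside the command error and once in the ** stderr ** block). A minimal manual check of the same condition, assuming the docker-driver node is reachable via minikube ssh (the profile name is a placeholder, since this part of the log does not name it):

    minikube ssh -p <profile> "sudo crictl ps -a --name kube-apiserver; curl -sk --max-time 5 https://localhost:8443/readyz || echo apiserver unreachable"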
	I1213 10:08:06.854189  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:06.854202  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:06.879668  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:06.879702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
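The cycle repeated throughout this stretch is minikube's apiserver health-retry loop: pgrep for a kube-apiserver process, then one crictl query per expected component, and, when everything comes back empty, a diagnostic sweep over kubelet, dmesg, describe nodes, containerd, and container status before pausing briefly and trying again. A condensed reproduction of the per-component probe, using the exact commands from the log (run inside the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c -> $ids"
    done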
	W1213 10:08:08.022653  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:10.022701  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
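The two warnings above come from a different process (pid 279351): the no-preload-328069 profile is being polled for node readiness in parallel, and its apiserver at 192.168.76.2:8443 is refusing connections just like the local one. The check it keeps retrying amounts to reading the node's Ready condition, roughly as follows (credentials omitted; an illustration, not minikube's actual code path):

    kubectl --server=https://192.168.76.2:8443 --insecure-skip-tls-verify \
      get node no-preload-328069 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'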
	I1213 10:08:09.406742  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:09.417411  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:09.417484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:09.442113  285837 cri.go:89] found id: ""
	I1213 10:08:09.442138  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.442147  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:09.442153  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:09.442218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:09.466316  285837 cri.go:89] found id: ""
	I1213 10:08:09.466342  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.466351  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:09.466357  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:09.466415  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:09.491678  285837 cri.go:89] found id: ""
	I1213 10:08:09.491703  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.491712  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:09.491718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:09.491776  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:09.515316  285837 cri.go:89] found id: ""
	I1213 10:08:09.515337  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.515346  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:09.515352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:09.515410  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:09.567095  285837 cri.go:89] found id: ""
	I1213 10:08:09.567116  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.567125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:09.567131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:09.567197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:09.616045  285837 cri.go:89] found id: ""
	I1213 10:08:09.616067  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.616076  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:09.616082  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:09.616142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:09.640449  285837 cri.go:89] found id: ""
	I1213 10:08:09.640479  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.640488  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:09.640495  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:09.640555  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:09.664888  285837 cri.go:89] found id: ""
	I1213 10:08:09.664912  285837 logs.go:282] 0 containers: []
	W1213 10:08:09.664921  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:09.664930  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:09.664941  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:09.691077  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:09.691106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:09.747246  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:09.747280  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:09.761112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:09.761140  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:09.830659  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:09.821800    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.822624    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.824372    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.825020    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:09.826720    2572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:09.830682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:09.830695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.356184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:12.368119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:12.368203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:12.394250  285837 cri.go:89] found id: ""
	I1213 10:08:12.394279  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.394291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:12.394298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:12.394365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:12.419062  285837 cri.go:89] found id: ""
	I1213 10:08:12.419086  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.419095  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:12.419102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:12.419159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:12.446274  285837 cri.go:89] found id: ""
	I1213 10:08:12.446300  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.446308  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:12.446315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:12.446371  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:12.469875  285837 cri.go:89] found id: ""
	I1213 10:08:12.469901  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.469910  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:12.469917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:12.469977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:12.495108  285837 cri.go:89] found id: ""
	I1213 10:08:12.495136  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.495145  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:12.495152  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:12.495207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:12.521169  285837 cri.go:89] found id: ""
	I1213 10:08:12.521190  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.521198  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:12.521204  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:12.521258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:12.557387  285837 cri.go:89] found id: ""
	I1213 10:08:12.557412  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.557421  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:12.557427  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:12.557483  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:12.586888  285837 cri.go:89] found id: ""
	I1213 10:08:12.586913  285837 logs.go:282] 0 containers: []
	W1213 10:08:12.586922  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:12.586931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:12.586942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:12.654328  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:12.654361  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:12.668044  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:12.668071  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:12.737226  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:12.728433    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.729118    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.730862    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.731560    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:12.733263    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:12.737248  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:12.737261  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:12.762749  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:12.762783  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:12.022956  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:14.522703  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:15.289142  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:15.301958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:15.302029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:15.330317  285837 cri.go:89] found id: ""
	I1213 10:08:15.330344  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.330353  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:15.330359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:15.330423  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:15.358090  285837 cri.go:89] found id: ""
	I1213 10:08:15.358115  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.358124  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:15.358130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:15.358187  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:15.382832  285837 cri.go:89] found id: ""
	I1213 10:08:15.382862  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.382871  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:15.382877  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:15.382940  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:15.409515  285837 cri.go:89] found id: ""
	I1213 10:08:15.409539  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.409549  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:15.409555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:15.409613  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:15.433885  285837 cri.go:89] found id: ""
	I1213 10:08:15.433911  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.433920  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:15.433926  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:15.433989  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:15.458618  285837 cri.go:89] found id: ""
	I1213 10:08:15.458643  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.458653  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:15.458659  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:15.458715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:15.482592  285837 cri.go:89] found id: ""
	I1213 10:08:15.482616  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.482625  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:15.482635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:15.482693  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:15.511125  285837 cri.go:89] found id: ""
	I1213 10:08:15.511153  285837 logs.go:282] 0 containers: []
	W1213 10:08:15.511163  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:15.511172  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:15.511183  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:15.584797  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:15.584833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:15.598725  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:15.598752  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:15.681678  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:15.672277    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.673044    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.674516    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.675095    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:15.676657    2787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:15.681701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:15.681714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:15.707610  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:15.707646  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:18.235184  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:18.246689  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:18.246762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:18.271129  285837 cri.go:89] found id: ""
	I1213 10:08:18.271155  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.271165  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:18.271172  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:18.271240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:18.296110  285837 cri.go:89] found id: ""
	I1213 10:08:18.296135  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.296144  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:18.296150  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:18.296208  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:18.321267  285837 cri.go:89] found id: ""
	I1213 10:08:18.321290  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.321304  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:18.321311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:18.321368  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:18.349274  285837 cri.go:89] found id: ""
	I1213 10:08:18.349300  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.349309  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:18.349315  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:18.349414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:18.373235  285837 cri.go:89] found id: ""
	I1213 10:08:18.373310  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.373325  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:18.373335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:18.373395  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:18.397157  285837 cri.go:89] found id: ""
	I1213 10:08:18.397181  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.397190  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:18.397196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:18.397283  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:18.421144  285837 cri.go:89] found id: ""
	I1213 10:08:18.421168  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.421177  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:18.421184  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:18.421243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:18.449567  285837 cri.go:89] found id: ""
	I1213 10:08:18.449643  285837 logs.go:282] 0 containers: []
	W1213 10:08:18.449659  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:18.449670  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:18.449682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:18.505803  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:18.505836  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:18.520075  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:18.520099  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:18.640681  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:18.632737    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.633402    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.634626    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.635073    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:18.636570    2905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:18.640706  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:18.640720  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:18.666166  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:18.666201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:08:17.022884  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	W1213 10:08:19.522795  279351 node_ready.go:55] error getting node "no-preload-328069" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-328069": dial tcp 192.168.76.2:8443: connect: connection refused
	I1213 10:08:20.031934  279351 node_ready.go:38] duration metric: took 6m0.009733727s for node "no-preload-328069" to be "Ready" ...
	I1213 10:08:20.035146  279351 out.go:203] 
	W1213 10:08:20.038039  279351 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:08:20.038064  279351 out.go:285] * 
	W1213 10:08:20.040199  279351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:08:20.043110  279351 out.go:203] 
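At 10:08:20 the no-preload run gives up: the Ready wait was bounded at 6m0s, the context deadline expires, and minikube exits with GUEST_START. The shape of that bounded wait, sketched in shell under the assumption of a working kubeconfig for the profile (the real logic lives in minikube's node_ready.go, not in this snippet):

    deadline=$(( $(date +%s) + 360 ))   # 6m0s budget, matching the log
    until kubectl get node no-preload-328069 \
            -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null \
          | grep -q True; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "WaitNodeCondition: context deadline exceeded" >&2; exit 1
      fi
      sleep 2
    done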
	I1213 10:08:21.195745  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:21.206020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:21.206084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:21.246086  285837 cri.go:89] found id: ""
	I1213 10:08:21.246106  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.246115  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:21.246122  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:21.246181  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:21.273446  285837 cri.go:89] found id: ""
	I1213 10:08:21.273469  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.273477  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:21.273483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:21.273543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:21.312010  285837 cri.go:89] found id: ""
	I1213 10:08:21.312031  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.312040  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:21.312046  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:21.312104  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:21.357158  285837 cri.go:89] found id: ""
	I1213 10:08:21.357177  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.357185  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:21.357192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:21.357248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:21.398112  285837 cri.go:89] found id: ""
	I1213 10:08:21.398135  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.398143  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:21.398149  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:21.398205  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:21.447244  285837 cri.go:89] found id: ""
	I1213 10:08:21.447268  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.447276  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:21.447283  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:21.447347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:21.495558  285837 cri.go:89] found id: ""
	I1213 10:08:21.495581  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.495589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:21.495595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:21.495652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:21.555224  285837 cri.go:89] found id: ""
	I1213 10:08:21.555248  285837 logs.go:282] 0 containers: []
	W1213 10:08:21.555257  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:21.555270  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:21.555281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:21.627890  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:21.627922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:21.674689  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:21.674714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:21.747238  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:21.747267  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:21.763785  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:21.763813  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:21.844164  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:21.834312    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.835883    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.836677    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838349    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:21.838777    3027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:24.345832  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:24.356414  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:24.356487  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:24.381314  285837 cri.go:89] found id: ""
	I1213 10:08:24.381340  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.381349  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:24.381356  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:24.381418  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:24.405581  285837 cri.go:89] found id: ""
	I1213 10:08:24.405606  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.405614  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:24.405621  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:24.405679  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:24.429873  285837 cri.go:89] found id: ""
	I1213 10:08:24.429895  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.429904  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:24.429911  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:24.429971  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:24.457573  285837 cri.go:89] found id: ""
	I1213 10:08:24.457600  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.457609  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:24.457616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:24.457674  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:24.481838  285837 cri.go:89] found id: ""
	I1213 10:08:24.481865  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.481874  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:24.481880  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:24.481937  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:24.507009  285837 cri.go:89] found id: ""
	I1213 10:08:24.507034  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.507043  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:24.507049  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:24.507105  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:24.550665  285837 cri.go:89] found id: ""
	I1213 10:08:24.550687  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.550695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:24.550702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:24.550757  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:24.584765  285837 cri.go:89] found id: ""
	I1213 10:08:24.584787  285837 logs.go:282] 0 containers: []
	W1213 10:08:24.584805  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:24.584815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:24.584828  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:24.652249  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:24.643336    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.644158    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.645848    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.646388    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:24.648047    3116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:24.652271  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:24.652285  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:24.677128  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:24.677161  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:24.705609  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:24.705635  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:24.761364  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:24.761399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:26.371661  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 10:08:26.432065  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:26.432188  285837 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
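The storage-provisioner apply fails before the manifest is ever submitted: kubectl's client-side validation downloads the server's OpenAPI schema from /openapi/v2, and that fetch hits the same refused socket. The error text suggests --validate=false, which would skip the schema download but cannot help here, since the subsequent create/update calls need the same (still down) apiserver:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml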
	I1213 10:08:27.285248  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:27.295647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:27.295723  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:27.320532  285837 cri.go:89] found id: ""
	I1213 10:08:27.320555  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.320564  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:27.320570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:27.320628  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:27.344722  285837 cri.go:89] found id: ""
	I1213 10:08:27.344748  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.344758  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:27.344764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:27.344852  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:27.370726  285837 cri.go:89] found id: ""
	I1213 10:08:27.370751  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.370760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:27.370766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:27.370849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:27.394557  285837 cri.go:89] found id: ""
	I1213 10:08:27.394583  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.394617  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:27.394628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:27.394703  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:27.418575  285837 cri.go:89] found id: ""
	I1213 10:08:27.418601  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.418610  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:27.418616  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:27.418673  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:27.444932  285837 cri.go:89] found id: ""
	I1213 10:08:27.444953  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.444962  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:27.444968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:27.445029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:27.468135  285837 cri.go:89] found id: ""
	I1213 10:08:27.468213  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.468237  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:27.468256  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:27.468330  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:27.493054  285837 cri.go:89] found id: ""
	I1213 10:08:27.493079  285837 logs.go:282] 0 containers: []
	W1213 10:08:27.493089  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:27.493098  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:27.493126  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:27.555066  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:27.555141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:27.572569  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:27.572644  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:27.641611  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:27.634328    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.634722    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.635977    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.636303    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:27.637740    3239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:27.641682  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:27.641704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:27.667653  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:27.667690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
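	The block above is minikube's log-gathering pass: with no apiserver answering, it probes each expected control-plane container through crictl, finds none, and falls back to kubelet, dmesg, describe-nodes, and containerd output. The same probe can be run by hand from inside the node (a sketch, assuming `minikube ssh` into this node works; the flags are the ones visible in the log, plus --no-pager for non-interactive use):
	
	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output here means the container was never created
	    sudo journalctl -u kubelet -n 400 --no-pager      # kubelet normally logs why the static pods are missing
	
	An empty crictl result for every component, as seen throughout this log, points at the kubelet or the static-pod manifests rather than at a crashing apiserver container.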
	I1213 10:08:29.393883  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 10:08:29.454286  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:29.454393  285837 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	]
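	Both failures above are client-side: `kubectl apply` validates manifests against the OpenAPI schema it downloads from the apiserver, so with localhost:8443 refusing connections the validation step fails before anything is sent. The `--validate=false` escape hatch suggested in the error would only move the failure to the request itself (a sketch using the paths from this log, not a fix):
	
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml
	    # expected while the apiserver is down:
	    # The connection to the server localhost:8443 was refused - did you specify the right host or port?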
	I1213 10:08:30.208961  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:30.219829  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:30.219950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:30.248442  285837 cri.go:89] found id: ""
	I1213 10:08:30.248471  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.248480  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:30.248486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:30.248569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:30.273935  285837 cri.go:89] found id: ""
	I1213 10:08:30.273964  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.273973  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:30.273979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:30.274067  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:30.299229  285837 cri.go:89] found id: ""
	I1213 10:08:30.299256  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.299265  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:30.299271  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:30.299328  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:30.327770  285837 cri.go:89] found id: ""
	I1213 10:08:30.327792  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.327801  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:30.327807  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:30.327863  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:30.352796  285837 cri.go:89] found id: ""
	I1213 10:08:30.352851  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.352861  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:30.352867  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:30.352928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:30.376505  285837 cri.go:89] found id: ""
	I1213 10:08:30.376530  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.376539  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:30.376546  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:30.376646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:30.400512  285837 cri.go:89] found id: ""
	I1213 10:08:30.400536  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.400545  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:30.400551  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:30.400611  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:30.425139  285837 cri.go:89] found id: ""
	I1213 10:08:30.425162  285837 logs.go:282] 0 containers: []
	W1213 10:08:30.425171  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:30.425181  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:30.425192  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:30.454686  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:30.454713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:30.509531  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:30.509568  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:30.527699  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:30.527727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:30.597883  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:30.589727    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.590173    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.591765    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.592345    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:30.594029    3370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:30.597907  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:30.597920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.123638  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:33.134229  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:33.134302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:33.161169  285837 cri.go:89] found id: ""
	I1213 10:08:33.161201  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.161210  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:33.161218  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:33.161278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:33.189591  285837 cri.go:89] found id: ""
	I1213 10:08:33.189614  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.189623  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:33.189629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:33.189691  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:33.213288  285837 cri.go:89] found id: ""
	I1213 10:08:33.213315  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.213325  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:33.213331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:33.213388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:33.237186  285837 cri.go:89] found id: ""
	I1213 10:08:33.237214  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.237223  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:33.237230  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:33.237291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:33.265589  285837 cri.go:89] found id: ""
	I1213 10:08:33.265615  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.265623  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:33.265629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:33.265687  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:33.289791  285837 cri.go:89] found id: ""
	I1213 10:08:33.289862  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.289884  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:33.289902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:33.289986  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:33.314058  285837 cri.go:89] found id: ""
	I1213 10:08:33.314085  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.314094  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:33.314099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:33.314170  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:33.338463  285837 cri.go:89] found id: ""
	I1213 10:08:33.338490  285837 logs.go:282] 0 containers: []
	W1213 10:08:33.338499  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:33.338509  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:33.338521  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:33.393919  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:33.393953  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:33.407152  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:33.407179  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:33.470838  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:33.463012    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.463421    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465099    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.465564    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:33.466988    3472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:33.470862  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:33.470875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:33.495641  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:33.495672  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.035663  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:36.047578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:36.047649  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:36.076122  285837 cri.go:89] found id: ""
	I1213 10:08:36.076145  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.076154  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:36.076160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:36.076236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:36.105524  285837 cri.go:89] found id: ""
	I1213 10:08:36.105554  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.105564  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:36.105570  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:36.105629  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:36.134491  285837 cri.go:89] found id: ""
	I1213 10:08:36.134565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.134587  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:36.134607  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:36.134695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:36.159376  285837 cri.go:89] found id: ""
	I1213 10:08:36.159449  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.159471  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:36.159489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:36.159608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:36.185490  285837 cri.go:89] found id: ""
	I1213 10:08:36.185565  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.185590  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:36.185604  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:36.185676  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:36.219394  285837 cri.go:89] found id: ""
	I1213 10:08:36.219422  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.219431  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:36.219438  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:36.219494  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:36.243333  285837 cri.go:89] found id: ""
	I1213 10:08:36.243357  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.243367  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:36.243373  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:36.243435  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:36.267160  285837 cri.go:89] found id: ""
	I1213 10:08:36.267187  285837 logs.go:282] 0 containers: []
	W1213 10:08:36.267196  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:36.267206  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:36.267218  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:36.280345  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:36.280375  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:36.343250  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:36.335308    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.335892    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337458    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.337934    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:36.339453    3583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:36.343272  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:36.343284  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:36.368575  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:36.368610  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:36.395546  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:36.395573  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:38.955916  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:38.966663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:38.966732  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:38.991698  285837 cri.go:89] found id: ""
	I1213 10:08:38.991722  285837 logs.go:282] 0 containers: []
	W1213 10:08:38.991730  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:38.991737  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:38.991795  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:39.029472  285837 cri.go:89] found id: ""
	I1213 10:08:39.029501  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.029510  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:39.029515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:39.029610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:39.058052  285837 cri.go:89] found id: ""
	I1213 10:08:39.058082  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.058097  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:39.058104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:39.058165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:39.086309  285837 cri.go:89] found id: ""
	I1213 10:08:39.086331  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.086339  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:39.086345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:39.086407  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:39.113392  285837 cri.go:89] found id: ""
	I1213 10:08:39.113420  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.113430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:39.113436  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:39.113497  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:39.138083  285837 cri.go:89] found id: ""
	I1213 10:08:39.138109  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.138118  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:39.138125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:39.138182  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:39.162132  285837 cri.go:89] found id: ""
	I1213 10:08:39.162160  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.162170  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:39.162176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:39.162239  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:39.190634  285837 cri.go:89] found id: ""
	I1213 10:08:39.190661  285837 logs.go:282] 0 containers: []
	W1213 10:08:39.190670  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:39.190679  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:39.190691  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:39.215694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:39.215727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:39.246161  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:39.246189  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:39.305962  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:39.305996  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:39.319717  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:39.319744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:39.382189  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:39.374352    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.374762    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376328    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.376646    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:39.378097    3708 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
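	The timestamps show the retry cadence: the `pgrep -xnf kube-apiserver.*minikube.*` probe fires roughly every three seconds (10:08:30, :33, :36, :39, :41, :44, :47), and each miss triggers another full log-gathering pass. Reduced to its shell equivalent (a sketch of the observed behaviour, not minikube's actual code):
	
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3    # matches the ~3 s spacing between the pgrep lines in this log
	    done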
	I1213 10:08:41.883328  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:41.894154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:41.894228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:41.921476  285837 cri.go:89] found id: ""
	I1213 10:08:41.921500  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.921509  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:41.921515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:41.921573  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:41.945812  285837 cri.go:89] found id: ""
	I1213 10:08:41.945835  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.945843  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:41.945849  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:41.945912  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:41.946276  285837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:08:41.977805  285837 cri.go:89] found id: ""
	I1213 10:08:41.977840  285837 logs.go:282] 0 containers: []
	W1213 10:08:41.977849  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:41.977855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:41.977923  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 10:08:42.037880  285837 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:08:42.037998  285837 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	]
	I1213 10:08:42.038333  285837 cri.go:89] found id: ""
	I1213 10:08:42.038351  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.038357  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:42.038364  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:42.038439  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:42.041174  285837 out.go:179] * Enabled addons: 
	I1213 10:08:42.044041  285837 addons.go:530] duration metric: took 1m50.679416537s for enable addons: enabled=[]
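	The addon goroutine finishes here with an empty result: both `kubectl apply` callbacks errored, so `enabled=[]`, and the 1m50s duration measures the retry window rather than any addon actually coming up. Filtering the two failures out of a captured log (`minikube.log` is a hypothetical file name for this output):
	
	    grep "returned an error: running callbacks" minikube.log   # matches the dashboard and default-storageclass warnings above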
	I1213 10:08:42.069124  285837 cri.go:89] found id: ""
	I1213 10:08:42.069158  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.069173  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:42.069181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:42.069277  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:42.114076  285837 cri.go:89] found id: ""
	I1213 10:08:42.114106  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.114119  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:42.114129  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:42.114201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:42.143501  285837 cri.go:89] found id: ""
	I1213 10:08:42.143577  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.143587  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:42.143594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:42.143665  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:42.174231  285837 cri.go:89] found id: ""
	I1213 10:08:42.174258  285837 logs.go:282] 0 containers: []
	W1213 10:08:42.174267  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:42.174278  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:42.174291  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:42.209465  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:42.209500  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:42.270663  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:42.270702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:42.286732  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:42.286769  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:42.356785  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:42.348392    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.349131    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.350759    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.351267    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:42.352901    3825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:42.356809  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:42.356822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:44.882858  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:44.893320  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:44.893392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:44.918585  285837 cri.go:89] found id: ""
	I1213 10:08:44.918612  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.918621  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:44.918628  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:44.918686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:44.943719  285837 cri.go:89] found id: ""
	I1213 10:08:44.943746  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.943755  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:44.943762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:44.943822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:44.968177  285837 cri.go:89] found id: ""
	I1213 10:08:44.968204  285837 logs.go:282] 0 containers: []
	W1213 10:08:44.968213  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:44.968219  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:44.968273  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:45.012025  285837 cri.go:89] found id: ""
	I1213 10:08:45.012052  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.012062  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:45.012069  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:45.012140  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:45.059717  285837 cri.go:89] found id: ""
	I1213 10:08:45.059815  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.059841  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:45.059864  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:45.059985  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:45.146429  285837 cri.go:89] found id: ""
	I1213 10:08:45.146507  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.146534  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:45.146585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:45.146680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:45.192650  285837 cri.go:89] found id: ""
	I1213 10:08:45.192683  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.192695  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:45.192704  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:45.192786  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:45.240936  285837 cri.go:89] found id: ""
	I1213 10:08:45.241266  285837 logs.go:282] 0 containers: []
	W1213 10:08:45.241306  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:45.241344  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:45.241423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:45.280178  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:45.280250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:45.343980  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:45.344023  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:45.357799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:45.357833  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:45.421366  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:45.413373    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.413837    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415405    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.415812    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:45.417291    3938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:08:45.421390  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:45.421403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
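The three-second cycle above repeats for the remainder of this section: minikube polls for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal Go sketch of that wait loop, assuming local execution with pgrep rather than minikube's actual ssh_runner plumbing (the pgrep pattern and the roughly 3s retry interval are taken from the log lines above):

// Minimal sketch, not minikube's implementation: poll for a
// kube-apiserver process the way the log above does, retrying on a
// fixed interval until the process appears or a deadline passes.
// The pattern "kube-apiserver.*minikube.*" is copied from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}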
	I1213 10:08:47.952239  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:47.963745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:47.963816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:47.989230  285837 cri.go:89] found id: ""
	I1213 10:08:47.989253  285837 logs.go:282] 0 containers: []
	W1213 10:08:47.989262  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:47.989288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:47.989360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:48.018062  285837 cri.go:89] found id: ""
	I1213 10:08:48.018087  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.018096  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:48.018102  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:48.018165  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:48.049042  285837 cri.go:89] found id: ""
	I1213 10:08:48.049068  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.049078  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:48.049084  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:48.049147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:48.077924  285837 cri.go:89] found id: ""
	I1213 10:08:48.077946  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.077955  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:48.077965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:48.078023  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:48.106258  285837 cri.go:89] found id: ""
	I1213 10:08:48.106284  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.106292  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:48.106298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:48.106355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:48.130836  285837 cri.go:89] found id: ""
	I1213 10:08:48.130861  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.130869  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:48.130883  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:48.130945  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:48.157446  285837 cri.go:89] found id: ""
	I1213 10:08:48.157470  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.157479  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:48.157485  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:48.157543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:48.182657  285837 cri.go:89] found id: ""
	I1213 10:08:48.182687  285837 logs.go:282] 0 containers: []
	W1213 10:08:48.182697  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:48.182707  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:48.182719  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:48.196607  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:48.196685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:48.261824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:48.252959    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.253768    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.255478    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.256212    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:48.257905    4039 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:48.261895  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:48.261914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:48.287393  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:48.287436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:48.318617  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:48.318647  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:50.875656  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:50.886169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:50.886240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:50.910775  285837 cri.go:89] found id: ""
	I1213 10:08:50.910801  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.910810  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:50.910817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:50.910874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:50.936159  285837 cri.go:89] found id: ""
	I1213 10:08:50.936185  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.936194  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:50.936200  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:50.936262  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:50.960845  285837 cri.go:89] found id: ""
	I1213 10:08:50.960879  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.960888  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:50.960895  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:50.960956  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:50.989232  285837 cri.go:89] found id: ""
	I1213 10:08:50.989262  285837 logs.go:282] 0 containers: []
	W1213 10:08:50.989271  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:50.989277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:50.989361  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:51.017908  285837 cri.go:89] found id: ""
	I1213 10:08:51.017936  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.017944  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:51.017950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:51.018012  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:51.062320  285837 cri.go:89] found id: ""
	I1213 10:08:51.062355  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.062363  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:51.062369  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:51.062436  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:51.091004  285837 cri.go:89] found id: ""
	I1213 10:08:51.091038  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.091047  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:51.091053  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:51.091118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:51.116510  285837 cri.go:89] found id: ""
	I1213 10:08:51.116543  285837 logs.go:282] 0 containers: []
	W1213 10:08:51.116552  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:51.116561  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:51.116574  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:51.147665  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:51.147690  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:51.203425  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:51.203457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:51.216632  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:51.216657  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:51.278157  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:51.270057    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.270725    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272274    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.272754    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:51.274265    4167 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:51.278181  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:51.278195  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:53.804075  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:53.815823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:53.815894  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:53.841157  285837 cri.go:89] found id: ""
	I1213 10:08:53.841180  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.841189  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:53.841195  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:53.841251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:53.869816  285837 cri.go:89] found id: ""
	I1213 10:08:53.869840  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.869850  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:53.869856  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:53.869916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:53.893754  285837 cri.go:89] found id: ""
	I1213 10:08:53.893781  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.893789  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:53.893796  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:53.893856  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:53.917859  285837 cri.go:89] found id: ""
	I1213 10:08:53.917881  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.917890  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:53.917896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:53.917957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:53.941859  285837 cri.go:89] found id: ""
	I1213 10:08:53.941886  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.941895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:53.941902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:53.941964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:53.969296  285837 cri.go:89] found id: ""
	I1213 10:08:53.969320  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.969329  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:53.969335  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:53.969392  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:53.993419  285837 cri.go:89] found id: ""
	I1213 10:08:53.993448  285837 logs.go:282] 0 containers: []
	W1213 10:08:53.993458  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:53.993464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:53.993520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:54.026047  285837 cri.go:89] found id: ""
	I1213 10:08:54.026074  285837 logs.go:282] 0 containers: []
	W1213 10:08:54.026084  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:54.026094  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:54.026106  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:54.042132  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:54.042160  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:54.121343  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:54.112495    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.113229    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115109    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.115795    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:54.117272    4266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:54.121416  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:54.121439  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:54.146468  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:54.146502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:54.173087  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:54.173114  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
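Every "describe nodes" attempt in these cycles fails the same way: kubectl dials [::1]:8443 and gets connection refused, meaning nothing is listening on the apiserver port at all. A minimal probe sketch under that assumption (the endpoint is hardcoded from the log; this is illustrative, not part of minikube):

// Minimal sketch: check whether anything is listening on the apiserver
// endpoint that kubectl fails to reach above. A "connection refused"
// from this probe matches the log's dial tcp [::1]:8443 error.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}

A refused connection here is consistent with the empty crictl listings above (the apiserver container never started), rather than a TLS or kubeconfig problem.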
	I1213 10:08:56.730884  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:56.741016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:56.741083  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:56.765437  285837 cri.go:89] found id: ""
	I1213 10:08:56.765461  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.765470  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:56.765476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:56.765535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:56.804701  285837 cri.go:89] found id: ""
	I1213 10:08:56.804725  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.804734  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:56.804740  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:56.804796  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:56.831548  285837 cri.go:89] found id: ""
	I1213 10:08:56.831573  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.831582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:56.831588  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:56.831646  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:56.860131  285837 cri.go:89] found id: ""
	I1213 10:08:56.860154  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.860162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:56.860169  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:56.860223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:56.884508  285837 cri.go:89] found id: ""
	I1213 10:08:56.884532  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.884540  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:56.884547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:56.884602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:56.909197  285837 cri.go:89] found id: ""
	I1213 10:08:56.909223  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.909232  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:56.909238  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:56.909296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:56.934089  285837 cri.go:89] found id: ""
	I1213 10:08:56.934110  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.934119  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:56.934126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:56.934183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:56.958725  285837 cri.go:89] found id: ""
	I1213 10:08:56.958745  285837 logs.go:282] 0 containers: []
	W1213 10:08:56.958754  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:56.958764  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:56.958775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:08:57.027824  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:08:57.017888    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.018557    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.020435    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.021275    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:08:57.023085    4370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:08:57.027846  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:08:57.027859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:08:57.054139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:57.054169  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:57.085873  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:57.085903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:57.144978  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:57.145011  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.659171  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:08:59.669569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:08:59.669639  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:08:59.695058  285837 cri.go:89] found id: ""
	I1213 10:08:59.695123  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.695146  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:08:59.695163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:08:59.695255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:08:59.720734  285837 cri.go:89] found id: ""
	I1213 10:08:59.720799  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.720822  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:08:59.720840  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:08:59.720935  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:08:59.744586  285837 cri.go:89] found id: ""
	I1213 10:08:59.744661  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.744684  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:08:59.744698  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:08:59.744770  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:08:59.771374  285837 cri.go:89] found id: ""
	I1213 10:08:59.771408  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.771417  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:08:59.771439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:08:59.771541  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:08:59.799406  285837 cri.go:89] found id: ""
	I1213 10:08:59.799441  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.799450  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:08:59.799473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:08:59.799577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:08:59.828067  285837 cri.go:89] found id: ""
	I1213 10:08:59.828142  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.828165  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:08:59.828187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:08:59.828255  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:08:59.853064  285837 cri.go:89] found id: ""
	I1213 10:08:59.853130  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.853152  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:08:59.853174  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:08:59.853238  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:08:59.881735  285837 cri.go:89] found id: ""
	I1213 10:08:59.881772  285837 logs.go:282] 0 containers: []
	W1213 10:08:59.881781  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:08:59.881790  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:08:59.881820  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:08:59.909551  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:08:59.909578  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:08:59.965746  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:08:59.965781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:08:59.979378  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:08:59.979407  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:00.187890  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:00.174278    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.176237    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.177484    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.179716    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:00.181016    4501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:00.187915  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:00.187930  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:02.742568  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:02.753251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:02.753340  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:02.786726  285837 cri.go:89] found id: ""
	I1213 10:09:02.786749  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.786758  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:02.786764  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:02.786823  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:02.819145  285837 cri.go:89] found id: ""
	I1213 10:09:02.819166  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.819174  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:02.819193  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:02.819251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:02.847100  285837 cri.go:89] found id: ""
	I1213 10:09:02.847124  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.847133  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:02.847139  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:02.847202  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:02.873292  285837 cri.go:89] found id: ""
	I1213 10:09:02.873316  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.873325  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:02.873332  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:02.873388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:02.897520  285837 cri.go:89] found id: ""
	I1213 10:09:02.897544  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.897553  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:02.897560  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:02.897617  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:02.922393  285837 cri.go:89] found id: ""
	I1213 10:09:02.922416  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.922425  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:02.922431  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:02.922490  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:02.947241  285837 cri.go:89] found id: ""
	I1213 10:09:02.947264  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.947272  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:02.947278  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:02.947335  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:02.972679  285837 cri.go:89] found id: ""
	I1213 10:09:02.972704  285837 logs.go:282] 0 containers: []
	W1213 10:09:02.972713  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:02.972722  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:02.972733  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:03.034867  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:03.034909  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:03.052540  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:03.052570  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:03.128351  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:03.118813    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.119444    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121123    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.121418    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:03.124196    4610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:03.128373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:03.128386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:03.154970  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:03.155008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
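The repeated found id: "" / "0 containers" pairs come from crictl returning empty output for every --name filter. A small, hypothetical Go helper (not minikube's cri.go) showing how such empty output maps to a zero-length ID list, using the exact crictl invocation from the log:

// Minimal sketch: list container IDs the way the log's
// `sudo crictl ps -a --quiet --name=X` calls do, and treat empty
// output as "0 containers", which is what every probe above reports.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}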
	I1213 10:09:05.683571  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:05.693787  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:05.693854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:05.718259  285837 cri.go:89] found id: ""
	I1213 10:09:05.718282  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.718291  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:05.718297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:05.718357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:05.745891  285837 cri.go:89] found id: ""
	I1213 10:09:05.745915  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.745924  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:05.745931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:05.745987  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:05.782435  285837 cri.go:89] found id: ""
	I1213 10:09:05.782460  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.782469  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:05.782475  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:05.782530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:05.814908  285837 cri.go:89] found id: ""
	I1213 10:09:05.814951  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.814962  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:05.814969  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:05.815039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:05.841933  285837 cri.go:89] found id: ""
	I1213 10:09:05.841961  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.841971  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:05.841978  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:05.842039  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:05.866012  285837 cri.go:89] found id: ""
	I1213 10:09:05.866041  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.866050  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:05.866056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:05.866115  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:05.890279  285837 cri.go:89] found id: ""
	I1213 10:09:05.890307  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.890315  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:05.890322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:05.890379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:05.915405  285837 cri.go:89] found id: ""
	I1213 10:09:05.915428  285837 logs.go:282] 0 containers: []
	W1213 10:09:05.915436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:05.915446  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:05.915457  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:05.971454  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:05.971486  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:05.984906  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:05.984951  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:06.083616  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:09:06.064999    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.075679    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.076130    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.077726    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:06.078044    4721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:09:06.083701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:06.083737  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:06.114405  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:06.114443  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:08.641977  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:08.652131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:08.652197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:08.675938  285837 cri.go:89] found id: ""
	I1213 10:09:08.675961  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.675970  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:08.675976  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:08.676038  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:08.702206  285837 cri.go:89] found id: ""
	I1213 10:09:08.702281  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.702304  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:08.702321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:08.702400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:08.726527  285837 cri.go:89] found id: ""
	I1213 10:09:08.726599  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.726621  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:08.726639  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:08.726726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:08.751396  285837 cri.go:89] found id: ""
	I1213 10:09:08.751469  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.751492  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:08.751555  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:08.751631  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:08.787796  285837 cri.go:89] found id: ""
	I1213 10:09:08.787828  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.787838  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:08.787844  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:08.787908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:08.819599  285837 cri.go:89] found id: ""
	I1213 10:09:08.819634  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.819643  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:08.819650  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:08.819717  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:08.846345  285837 cri.go:89] found id: ""
	I1213 10:09:08.846372  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.846381  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:08.846387  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:08.846445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:08.870594  285837 cri.go:89] found id: ""
	I1213 10:09:08.870664  285837 logs.go:282] 0 containers: []
	W1213 10:09:08.870710  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
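	The sweep above issues one crictl query per control-plane component and finds no container IDs for any of them. A minimal sketch of the same check, assuming crictl is installed on the node (component names taken verbatim from the log):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")   # -a: all states, --quiet: IDs only
	      echo "$c: ${ids:-<none>}"
	    done

	Every component printing <none> corresponds to a found id: "" entry above.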
	I1213 10:09:08.870746  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:08.870797  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:08.928780  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:08.928814  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
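	Assuming the util-linux dmesg, the bundled short options above expand as follows: -P disables the pager (which -H, human-readable output, would otherwise start) and -L=never turns colour off, while --level keeps only the listed priorities. The equivalent long-form invocation:

	    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400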
	I1213 10:09:08.944017  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:08.944043  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:09.014860  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:09.004450    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.005493    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008097    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.008783    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:09.010658    4835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
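	The describe-nodes failure above is a client-side symptom: nothing is listening on localhost:8443 inside the node, so every request is refused before it can reach an apiserver. A quick manual probe, with the hypothetical <profile> placeholder standing in for the profile under test:

	    minikube -p <profile> ssh -- curl -sk https://localhost:8443/healthz; echo
	    # "connection refused" here is consistent with the empty crictl listings above:
	    # the apiserver container was never created, so nothing serves port 8443.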
	I1213 10:09:09.014883  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:09.014896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:09.047081  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:09.047174  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
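	The container-status command above relies on a small shell fallback chain: the backticks substitute crictl's full path when which finds it (or the bare word crictl otherwise), and the trailing || sudo docker ps -a runs only if the crictl invocation fails. Spelled out without backticks:

	    CRICTL=$(which crictl || echo crictl)
	    sudo "$CRICTL" ps -a || sudo docker ps -a    # fall back to docker when crictl is absent or errors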
	I1213 10:09:11.588198  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:11.600902  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:11.600973  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:11.629261  285837 cri.go:89] found id: ""
	I1213 10:09:11.629286  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.629295  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:11.629301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:11.629362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:11.653238  285837 cri.go:89] found id: ""
	I1213 10:09:11.653260  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.653269  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:11.653275  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:11.653332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:11.681922  285837 cri.go:89] found id: ""
	I1213 10:09:11.681946  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.681956  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:11.681962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:11.682019  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:11.711733  285837 cri.go:89] found id: ""
	I1213 10:09:11.711762  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.711770  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:11.711776  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:11.711834  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:11.736582  285837 cri.go:89] found id: ""
	I1213 10:09:11.736608  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.736616  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:11.736625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:11.736681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:11.759927  285837 cri.go:89] found id: ""
	I1213 10:09:11.759951  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.759961  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:11.759967  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:11.760022  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:11.794760  285837 cri.go:89] found id: ""
	I1213 10:09:11.794787  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.794797  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:11.794803  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:11.794862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:11.822009  285837 cri.go:89] found id: ""
	I1213 10:09:11.822037  285837 logs.go:282] 0 containers: []
	W1213 10:09:11.822047  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:11.822056  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:11.822068  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:11.889206  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:11.881444    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882052    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.882987    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.883619    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:11.885240    4943 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:11.889228  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:11.889241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:11.914544  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:11.914576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:11.944548  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:11.944576  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:12.000427  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:12.000460  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:14.516876  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:14.527580  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:14.527657  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:14.551881  285837 cri.go:89] found id: ""
	I1213 10:09:14.551903  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.551911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:14.551917  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:14.551977  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:14.576244  285837 cri.go:89] found id: ""
	I1213 10:09:14.576267  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.576275  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:14.576281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:14.576337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:14.604979  285837 cri.go:89] found id: ""
	I1213 10:09:14.605002  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.605011  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:14.605017  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:14.605084  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:14.633024  285837 cri.go:89] found id: ""
	I1213 10:09:14.633050  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.633059  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:14.633065  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:14.633123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:14.661288  285837 cri.go:89] found id: ""
	I1213 10:09:14.661316  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.661324  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:14.661331  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:14.661390  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:14.686665  285837 cri.go:89] found id: ""
	I1213 10:09:14.686694  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.686704  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:14.686711  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:14.686769  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:14.712111  285837 cri.go:89] found id: ""
	I1213 10:09:14.712139  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.712148  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:14.712156  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:14.712212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:14.740346  285837 cri.go:89] found id: ""
	I1213 10:09:14.740392  285837 logs.go:282] 0 containers: []
	W1213 10:09:14.740401  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:14.740410  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:14.740423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:14.753460  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:14.753488  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:14.834789  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:14.826269    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.827206    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.828874    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.829190    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:14.830588    5062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:14.834812  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:14.834824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:14.859634  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:14.859666  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:14.890753  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:14.890826  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.450898  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:17.461075  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:17.461145  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:17.486593  285837 cri.go:89] found id: ""
	I1213 10:09:17.486616  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.486625  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:17.486632  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:17.486689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:17.511138  285837 cri.go:89] found id: ""
	I1213 10:09:17.511214  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.511230  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:17.511237  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:17.511302  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:17.535780  285837 cri.go:89] found id: ""
	I1213 10:09:17.535808  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.535818  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:17.535824  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:17.535879  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:17.559884  285837 cri.go:89] found id: ""
	I1213 10:09:17.559907  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.559916  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:17.559922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:17.559983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:17.588420  285837 cri.go:89] found id: ""
	I1213 10:09:17.588446  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.588456  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:17.588462  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:17.588520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:17.616357  285837 cri.go:89] found id: ""
	I1213 10:09:17.616427  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.616450  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:17.616470  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:17.616553  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:17.640411  285837 cri.go:89] found id: ""
	I1213 10:09:17.640481  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.640506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:17.640525  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:17.640606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:17.670821  285837 cri.go:89] found id: ""
	I1213 10:09:17.670887  285837 logs.go:282] 0 containers: []
	W1213 10:09:17.670910  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:17.670931  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:17.670976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:17.730483  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:17.730517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:17.743937  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:17.743965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:17.835718  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:17.824527    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.825248    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.826849    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.827326    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:17.828934    5178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:17.835789  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:17.835817  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:17.865207  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:17.865241  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:20.392780  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:20.403097  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:20.403162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:20.428028  285837 cri.go:89] found id: ""
	I1213 10:09:20.428060  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.428069  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:20.428076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:20.428141  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:20.452273  285837 cri.go:89] found id: ""
	I1213 10:09:20.452297  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.452305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:20.452312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:20.452375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:20.476828  285837 cri.go:89] found id: ""
	I1213 10:09:20.476852  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.476860  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:20.476866  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:20.476922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:20.500929  285837 cri.go:89] found id: ""
	I1213 10:09:20.500952  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.500968  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:20.500975  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:20.501033  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:20.528180  285837 cri.go:89] found id: ""
	I1213 10:09:20.528207  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.528217  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:20.528223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:20.528284  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:20.553290  285837 cri.go:89] found id: ""
	I1213 10:09:20.553314  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.553323  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:20.553330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:20.553386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:20.577422  285837 cri.go:89] found id: ""
	I1213 10:09:20.577446  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.577455  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:20.577464  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:20.577518  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:20.601597  285837 cri.go:89] found id: ""
	I1213 10:09:20.601623  285837 logs.go:282] 0 containers: []
	W1213 10:09:20.601632  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:20.601643  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:20.601654  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:20.656521  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:20.656556  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:20.669890  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:20.669920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:20.737784  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:20.729553    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.730242    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.731915    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.732434    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:20.734060    5293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:20.737806  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:20.737818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:20.762811  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:20.762845  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.299625  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:23.311059  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:23.311129  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:23.338174  285837 cri.go:89] found id: ""
	I1213 10:09:23.338197  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.338205  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:23.338211  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:23.338269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:23.363653  285837 cri.go:89] found id: ""
	I1213 10:09:23.363674  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.363683  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:23.363688  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:23.363750  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:23.387166  285837 cri.go:89] found id: ""
	I1213 10:09:23.387187  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.387195  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:23.387201  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:23.387257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:23.411627  285837 cri.go:89] found id: ""
	I1213 10:09:23.411650  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.411659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:23.411665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:23.411731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:23.440839  285837 cri.go:89] found id: ""
	I1213 10:09:23.440866  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.440885  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:23.440892  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:23.440950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:23.464835  285837 cri.go:89] found id: ""
	I1213 10:09:23.464857  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.464866  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:23.464872  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:23.464927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:23.489635  285837 cri.go:89] found id: ""
	I1213 10:09:23.489659  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.489668  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:23.489675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:23.489762  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:23.513816  285837 cri.go:89] found id: ""
	I1213 10:09:23.513847  285837 logs.go:282] 0 containers: []
	W1213 10:09:23.513855  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:23.513865  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:23.513875  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:23.539139  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:23.539173  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:23.565435  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:23.565463  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:23.622023  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:23.622058  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:23.635231  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:23.635263  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:23.699057  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:23.690976    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.691550    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693329    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.693735    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:23.695223    5420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.200117  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:26.210617  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:26.210696  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:26.235048  285837 cri.go:89] found id: ""
	I1213 10:09:26.235076  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.235085  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:26.235092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:26.235148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:26.259259  285837 cri.go:89] found id: ""
	I1213 10:09:26.259285  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.259294  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:26.259300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:26.259355  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:26.291742  285837 cri.go:89] found id: ""
	I1213 10:09:26.291767  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.291776  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:26.291782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:26.291864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:26.320200  285837 cri.go:89] found id: ""
	I1213 10:09:26.320225  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.320234  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:26.320240  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:26.320296  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:26.347996  285837 cri.go:89] found id: ""
	I1213 10:09:26.348023  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.348033  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:26.348039  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:26.348097  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:26.376752  285837 cri.go:89] found id: ""
	I1213 10:09:26.376816  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.376830  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:26.376837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:26.376893  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:26.404777  285837 cri.go:89] found id: ""
	I1213 10:09:26.404802  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.404811  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:26.404817  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:26.404876  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:26.428882  285837 cri.go:89] found id: ""
	I1213 10:09:26.428904  285837 logs.go:282] 0 containers: []
	W1213 10:09:26.428913  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:26.428922  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:26.428933  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:26.489455  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:26.489494  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:26.504291  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:26.504320  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:26.573661  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:26.564906    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.565725    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567441    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.567990    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:26.569686    5520 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:26.573684  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:26.573698  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:26.599463  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:26.599496  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.127681  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:29.138010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:29.138081  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:29.161918  285837 cri.go:89] found id: ""
	I1213 10:09:29.161989  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.162013  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:29.162031  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:29.162114  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:29.186603  285837 cri.go:89] found id: ""
	I1213 10:09:29.186678  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.186700  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:29.186717  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:29.186798  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:29.210425  285837 cri.go:89] found id: ""
	I1213 10:09:29.210489  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.210512  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:29.210529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:29.210614  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:29.237345  285837 cri.go:89] found id: ""
	I1213 10:09:29.237369  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.237377  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:29.237384  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:29.237440  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:29.260918  285837 cri.go:89] found id: ""
	I1213 10:09:29.260997  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.261013  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:29.261020  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:29.261075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:29.289712  285837 cri.go:89] found id: ""
	I1213 10:09:29.289738  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.289747  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:29.289753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:29.289808  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:29.321797  285837 cri.go:89] found id: ""
	I1213 10:09:29.321821  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.321831  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:29.321839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:29.321895  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:29.353498  285837 cri.go:89] found id: ""
	I1213 10:09:29.353523  285837 logs.go:282] 0 containers: []
	W1213 10:09:29.353532  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:29.353542  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:29.353582  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:29.415160  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:29.407188    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.407994    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409598    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.409900    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:29.411394    5629 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:29.415183  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:29.415198  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:29.440924  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:29.440961  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:29.468916  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:29.468944  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:29.528468  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:29.528501  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:32.042457  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:32.054480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:32.054563  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:32.088256  285837 cri.go:89] found id: ""
	I1213 10:09:32.088282  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.088290  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:32.088296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:32.088382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:32.114080  285837 cri.go:89] found id: ""
	I1213 10:09:32.114102  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.114110  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:32.114116  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:32.114195  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:32.138708  285837 cri.go:89] found id: ""
	I1213 10:09:32.138732  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.138740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:32.138746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:32.138851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:32.163676  285837 cri.go:89] found id: ""
	I1213 10:09:32.163706  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.163715  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:32.163721  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:32.163780  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:32.188486  285837 cri.go:89] found id: ""
	I1213 10:09:32.188565  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.188582  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:32.188589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:32.188652  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:32.212912  285837 cri.go:89] found id: ""
	I1213 10:09:32.212936  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.212945  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:32.212951  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:32.213034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:32.242067  285837 cri.go:89] found id: ""
	I1213 10:09:32.242090  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.242099  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:32.242106  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:32.242163  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:32.280832  285837 cri.go:89] found id: ""
	I1213 10:09:32.280855  285837 logs.go:282] 0 containers: []
	W1213 10:09:32.280864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:32.280874  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:32.280885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:32.344925  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:32.344963  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:32.359370  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:32.359400  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:32.425438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:32.416924    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.417557    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419154    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.419748    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:32.421439    5745 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:32.425459  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:32.425472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:32.449956  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:32.449990  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
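
Every describe-nodes attempt in this stretch fails identically: nothing is listening on localhost:8443, the endpoint the node-local kubeconfig points at, so the failure is the apiserver never starting rather than a kubectl misconfiguration. Two quick checks from inside the node would confirm that; a sketch using standard tools, not part of the harness:

    # is anything bound to the apiserver port?
    sudo ss -ltnp | grep 8443
    # does the apiserver answer its health endpoint?
    curl -sk https://localhost:8443/healthz
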
	I1213 10:09:34.978245  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:34.989159  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:34.989236  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:35.017235  285837 cri.go:89] found id: ""
	I1213 10:09:35.017258  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.017267  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:35.017273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:35.017341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:35.050437  285837 cri.go:89] found id: ""
	I1213 10:09:35.050458  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.050467  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:35.050473  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:35.050529  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:35.085905  285837 cri.go:89] found id: ""
	I1213 10:09:35.085926  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.085935  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:35.085941  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:35.085994  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:35.118261  285837 cri.go:89] found id: ""
	I1213 10:09:35.118283  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.118292  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:35.118299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:35.118360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:35.144531  285837 cri.go:89] found id: ""
	I1213 10:09:35.144555  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.144563  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:35.144569  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:35.144627  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:35.170241  285837 cri.go:89] found id: ""
	I1213 10:09:35.170317  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.170340  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:35.170359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:35.170433  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:35.195958  285837 cri.go:89] found id: ""
	I1213 10:09:35.195986  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.195995  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:35.196001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:35.196066  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:35.220509  285837 cri.go:89] found id: ""
	I1213 10:09:35.220535  285837 logs.go:282] 0 containers: []
	W1213 10:09:35.220544  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:35.220553  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:35.220563  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:35.276863  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:35.277042  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:35.294239  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:35.294265  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:35.367085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:35.358803    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.359461    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361052    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.361604    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:35.363156    5859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:35.367108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:35.367121  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:35.392804  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:35.392842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
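
All of the container lookups above come back empty (found id: ""). Since crictl ps takes a name regex, the whole control-plane sweep collapses into one loop; a sketch, assuming containerd as the runtime and crictl on PATH, as in this job:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done
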
	I1213 10:09:37.919692  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:37.929805  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:37.929875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:37.954708  285837 cri.go:89] found id: ""
	I1213 10:09:37.954782  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.954806  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:37.954825  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:37.954914  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:37.979259  285837 cri.go:89] found id: ""
	I1213 10:09:37.979332  285837 logs.go:282] 0 containers: []
	W1213 10:09:37.979357  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:37.979375  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:37.979459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:38.008473  285837 cri.go:89] found id: ""
	I1213 10:09:38.008554  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.008579  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:38.008597  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:38.008695  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:38.051746  285837 cri.go:89] found id: ""
	I1213 10:09:38.051820  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.051843  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:38.051863  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:38.051957  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:38.082373  285837 cri.go:89] found id: ""
	I1213 10:09:38.082405  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.082413  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:38.082419  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:38.082477  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:38.109623  285837 cri.go:89] found id: ""
	I1213 10:09:38.109646  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.109655  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:38.109661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:38.109718  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:38.133779  285837 cri.go:89] found id: ""
	I1213 10:09:38.133807  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.133815  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:38.133822  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:38.133892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:38.158199  285837 cri.go:89] found id: ""
	I1213 10:09:38.158263  285837 logs.go:282] 0 containers: []
	W1213 10:09:38.158286  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:38.158338  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:38.158371  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:38.171856  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:38.171885  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:38.237998  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:38.230230    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.230727    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232222    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.232668    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:38.234110    5963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:38.238021  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:38.238033  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:38.263694  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:38.263729  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:38.301569  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:38.301594  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
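
Note that these kubectl invocations run against the node-local /var/lib/minikube/kubeconfig, so the localhost:8443 target comes from that file, not from the host's kubeconfig. Where it points can be confirmed directly from inside the node (a sketch):

    grep 'server:' /var/lib/minikube/kubeconfig
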
	I1213 10:09:40.863927  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:40.874647  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:40.874715  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:40.902898  285837 cri.go:89] found id: ""
	I1213 10:09:40.902922  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.902931  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:40.902939  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:40.903000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:40.928251  285837 cri.go:89] found id: ""
	I1213 10:09:40.928277  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.928287  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:40.928294  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:40.928350  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:40.952178  285837 cri.go:89] found id: ""
	I1213 10:09:40.952201  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.952210  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:40.952216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:40.952271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:40.980522  285837 cri.go:89] found id: ""
	I1213 10:09:40.980548  285837 logs.go:282] 0 containers: []
	W1213 10:09:40.980557  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:40.980564  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:40.980620  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:41.007391  285837 cri.go:89] found id: ""
	I1213 10:09:41.007417  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.007427  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:41.007433  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:41.007498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:41.056690  285837 cri.go:89] found id: ""
	I1213 10:09:41.056762  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.056786  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:41.056806  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:41.056892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:41.082372  285837 cri.go:89] found id: ""
	I1213 10:09:41.082443  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.082481  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:41.082505  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:41.082592  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:41.106556  285837 cri.go:89] found id: ""
	I1213 10:09:41.106626  285837 logs.go:282] 0 containers: []
	W1213 10:09:41.106648  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:41.106680  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:41.106722  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:41.162248  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:41.162281  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:41.175724  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:41.175753  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:41.243327  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:41.234749    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.235634    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237273    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.237827    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:41.239427    6076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:41.243393  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:41.243420  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:41.269060  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:41.269142  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
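
The container-status command is deliberately defensive: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's absolute path when which can find it, falls back to the bare name otherwise, and only tries docker ps if the crictl invocation itself fails. The same idiom in modern $() form (an equivalent sketch):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
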
	I1213 10:09:43.812670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:43.823281  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:43.823360  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:43.846549  285837 cri.go:89] found id: ""
	I1213 10:09:43.846571  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.846579  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:43.846585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:43.846640  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:43.879456  285837 cri.go:89] found id: ""
	I1213 10:09:43.879541  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.879557  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:43.879563  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:43.879632  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:43.904717  285837 cri.go:89] found id: ""
	I1213 10:09:43.904745  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.904755  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:43.904761  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:43.904818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:43.929847  285837 cri.go:89] found id: ""
	I1213 10:09:43.929873  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.929883  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:43.929890  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:43.929950  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:43.954073  285837 cri.go:89] found id: ""
	I1213 10:09:43.954146  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.954168  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:43.954187  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:43.954278  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:43.979175  285837 cri.go:89] found id: ""
	I1213 10:09:43.979257  285837 logs.go:282] 0 containers: []
	W1213 10:09:43.979280  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:43.979299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:43.979406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:44.013549  285837 cri.go:89] found id: ""
	I1213 10:09:44.013574  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.013584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:44.013590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:44.013653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:44.043145  285837 cri.go:89] found id: ""
	I1213 10:09:44.043222  285837 logs.go:282] 0 containers: []
	W1213 10:09:44.043244  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:44.043267  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:44.043306  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:44.058657  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:44.058685  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:44.137763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:44.128648    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.129161    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.130836    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.131881    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:44.132537    6184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:44.137786  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:44.137799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:44.163596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:44.163630  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:44.193981  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:44.194008  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:46.751860  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:46.762578  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:46.762653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:46.787138  285837 cri.go:89] found id: ""
	I1213 10:09:46.787161  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.787170  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:46.787176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:46.787234  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:46.812348  285837 cri.go:89] found id: ""
	I1213 10:09:46.812371  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.812379  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:46.812386  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:46.812445  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:46.840689  285837 cri.go:89] found id: ""
	I1213 10:09:46.840712  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.840721  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:46.840727  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:46.840784  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:46.870288  285837 cri.go:89] found id: ""
	I1213 10:09:46.870313  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.870322  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:46.870328  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:46.870450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:46.896231  285837 cri.go:89] found id: ""
	I1213 10:09:46.896255  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.896269  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:46.896276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:46.896334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:46.921572  285837 cri.go:89] found id: ""
	I1213 10:09:46.921604  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.921613  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:46.921636  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:46.921721  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:46.948191  285837 cri.go:89] found id: ""
	I1213 10:09:46.948220  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.948229  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:46.948236  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:46.948365  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:46.977518  285837 cri.go:89] found id: ""
	I1213 10:09:46.977585  285837 logs.go:282] 0 containers: []
	W1213 10:09:46.977602  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:46.977612  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:46.977624  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:47.034861  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:47.034901  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:47.049608  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:47.049638  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:47.120624  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:47.111262    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.111986    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.113725    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.114377    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:47.116185    6301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:47.120648  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:47.120662  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:47.146083  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:47.146118  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
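
Between sweeps the harness polls for an apiserver process roughly every three seconds. In pgrep -xnf, -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest match. A standalone version of the wait, with a timeout added so it cannot spin forever; a sketch, not the harness's own code:

    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never started" >&2; exit 1; }
      sleep 3
    done
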
	I1213 10:09:49.676188  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:49.688330  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:49.688400  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:49.714933  285837 cri.go:89] found id: ""
	I1213 10:09:49.714958  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.714967  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:49.714973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:49.715035  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:49.739883  285837 cri.go:89] found id: ""
	I1213 10:09:49.739912  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.739923  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:49.739931  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:49.739990  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:49.768673  285837 cri.go:89] found id: ""
	I1213 10:09:49.768699  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.768718  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:49.768726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:49.768788  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:49.794628  285837 cri.go:89] found id: ""
	I1213 10:09:49.794694  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.794717  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:49.794735  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:49.794822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:49.819205  285837 cri.go:89] found id: ""
	I1213 10:09:49.819237  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.819247  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:49.819253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:49.819318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:49.843189  285837 cri.go:89] found id: ""
	I1213 10:09:49.843212  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.843228  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:49.843235  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:49.843303  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:49.867965  285837 cri.go:89] found id: ""
	I1213 10:09:49.867998  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.868008  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:49.868016  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:49.868089  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:49.891561  285837 cri.go:89] found id: ""
	I1213 10:09:49.891586  285837 logs.go:282] 0 containers: []
	W1213 10:09:49.891595  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:49.891605  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:49.891629  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:49.953785  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:49.953824  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:49.967425  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:49.967453  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:50.041318  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:50.031798    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033081    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.033890    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035607    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:50.035911    6413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:50.041391  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:50.041419  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:50.070955  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:50.071029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.603479  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:52.615038  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:52.615113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:52.643538  285837 cri.go:89] found id: ""
	I1213 10:09:52.643561  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.643570  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:52.643577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:52.643636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:52.668477  285837 cri.go:89] found id: ""
	I1213 10:09:52.668514  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.668523  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:52.668530  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:52.668586  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:52.695551  285837 cri.go:89] found id: ""
	I1213 10:09:52.695574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.695582  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:52.695589  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:52.695647  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:52.723965  285837 cri.go:89] found id: ""
	I1213 10:09:52.723991  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.724000  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:52.724007  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:52.724061  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:52.748159  285837 cri.go:89] found id: ""
	I1213 10:09:52.748186  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.748195  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:52.748202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:52.748257  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:52.771805  285837 cri.go:89] found id: ""
	I1213 10:09:52.771836  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.771846  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:52.771853  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:52.771910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:52.795549  285837 cri.go:89] found id: ""
	I1213 10:09:52.795574  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.795584  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:52.795590  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:52.795650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:52.819748  285837 cri.go:89] found id: ""
	I1213 10:09:52.819775  285837 logs.go:282] 0 containers: []
	W1213 10:09:52.819785  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:52.819794  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:52.819805  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:52.882031  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:52.873734    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.874381    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876013    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.876486    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:52.878274    6519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:52.882051  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:52.882062  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:52.907759  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:52.907795  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:52.934360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:52.934390  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:52.989946  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:52.989982  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
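	
	The block above is minikube polling for a running apiserver: it pgreps for a kube-apiserver process, asks crictl for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. A minimal shell sketch that replays the same checks by hand (assumptions: shell access to the node, e.g. via `minikube ssh`, and crictl on the PATH; the component list is copied from the log above):
	
	    # Probe each control-plane component the same way the log above does.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -n "$ids" ]; then echo "$c: $ids"; else echo "no container matching \"$c\""; fi
	    done
	    # And the process-level check:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not found'
	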
	I1213 10:09:55.503671  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:55.514125  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:55.514196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:55.540594  285837 cri.go:89] found id: ""
	I1213 10:09:55.540621  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.540631  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:55.540637  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:55.540694  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:55.570352  285837 cri.go:89] found id: ""
	I1213 10:09:55.570378  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.570387  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:55.570395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:55.570450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:55.596509  285837 cri.go:89] found id: ""
	I1213 10:09:55.596533  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.596541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:55.596547  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:55.596604  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:55.622553  285837 cri.go:89] found id: ""
	I1213 10:09:55.622579  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.622587  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:55.622593  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:55.622650  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:55.647770  285837 cri.go:89] found id: ""
	I1213 10:09:55.647794  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.647803  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:55.647809  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:55.647874  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:55.672615  285837 cri.go:89] found id: ""
	I1213 10:09:55.672679  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.672693  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:55.672701  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:55.672756  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:55.697017  285837 cri.go:89] found id: ""
	I1213 10:09:55.697041  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.697050  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:55.697063  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:55.697123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:55.720795  285837 cri.go:89] found id: ""
	I1213 10:09:55.720866  285837 logs.go:282] 0 containers: []
	W1213 10:09:55.720891  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:55.720914  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:55.720950  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:55.745823  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:55.745857  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:09:55.774634  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:55.774663  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:55.830064  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:55.830098  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:55.843868  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:55.843896  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:55.905758  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:55.897123    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.897694    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899210    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.899757    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:55.901522    6651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:58.406072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:09:58.418120  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:09:58.418199  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:09:58.443021  285837 cri.go:89] found id: ""
	I1213 10:09:58.443050  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.443059  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:09:58.443066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:09:58.443126  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:09:58.468115  285837 cri.go:89] found id: ""
	I1213 10:09:58.468139  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.468147  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:09:58.468154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:09:58.468214  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:09:58.496991  285837 cri.go:89] found id: ""
	I1213 10:09:58.497015  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.497025  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:09:58.497032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:09:58.497098  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:09:58.530053  285837 cri.go:89] found id: ""
	I1213 10:09:58.530076  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.530085  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:09:58.530091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:09:58.530149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:09:58.561990  285837 cri.go:89] found id: ""
	I1213 10:09:58.562013  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.562022  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:09:58.562028  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:09:58.562091  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:09:58.595912  285837 cri.go:89] found id: ""
	I1213 10:09:58.595984  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.596007  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:09:58.596026  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:09:58.596113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:09:58.626521  285837 cri.go:89] found id: ""
	I1213 10:09:58.626593  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.626616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:09:58.626635  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:09:58.626720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:09:58.655898  285837 cri.go:89] found id: ""
	I1213 10:09:58.655963  285837 logs.go:282] 0 containers: []
	W1213 10:09:58.655987  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:09:58.656008  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:09:58.656032  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:09:58.711709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:09:58.711741  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:09:58.726942  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:09:58.726969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:09:58.798293  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:09:58.790256    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.791061    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.792601    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.793006    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:09:58.794454    6750 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:09:58.798314  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:09:58.798327  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:09:58.822936  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:09:58.822973  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:01.351670  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:01.362442  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:01.362517  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:01.388700  285837 cri.go:89] found id: ""
	I1213 10:10:01.388734  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.388744  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:01.388751  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:01.388824  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:01.418393  285837 cri.go:89] found id: ""
	I1213 10:10:01.418471  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.418496  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:01.418515  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:01.418602  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:01.449860  285837 cri.go:89] found id: ""
	I1213 10:10:01.449937  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.449962  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:01.449980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:01.450064  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:01.475973  285837 cri.go:89] found id: ""
	I1213 10:10:01.476035  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.476049  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:01.476056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:01.476118  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:01.501452  285837 cri.go:89] found id: ""
	I1213 10:10:01.501474  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.501499  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:01.501506  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:01.501576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:01.527738  285837 cri.go:89] found id: ""
	I1213 10:10:01.527808  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.527832  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:01.527852  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:01.527946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:01.553256  285837 cri.go:89] found id: ""
	I1213 10:10:01.553280  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.553289  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:01.553296  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:01.553354  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:01.578833  285837 cri.go:89] found id: ""
	I1213 10:10:01.578855  285837 logs.go:282] 0 containers: []
	W1213 10:10:01.578864  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:01.578875  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:01.578892  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:01.634755  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:01.634790  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:01.649799  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:01.649832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:01.721470  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:01.711679    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.712530    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715342    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.715994    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:01.717520    6856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:01.721491  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:01.721504  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:01.747322  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:01.747357  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.288307  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:04.300683  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:04.300805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:04.332215  285837 cri.go:89] found id: ""
	I1213 10:10:04.332242  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.332252  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:04.332259  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:04.332318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:04.358136  285837 cri.go:89] found id: ""
	I1213 10:10:04.358164  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.358173  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:04.358180  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:04.358248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:04.383446  285837 cri.go:89] found id: ""
	I1213 10:10:04.383479  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.383488  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:04.383493  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:04.383578  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:04.408888  285837 cri.go:89] found id: ""
	I1213 10:10:04.408914  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.408923  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:04.408930  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:04.409009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:04.438109  285837 cri.go:89] found id: ""
	I1213 10:10:04.438145  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.438155  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:04.438163  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:04.438233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:04.462623  285837 cri.go:89] found id: ""
	I1213 10:10:04.462692  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.462725  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:04.462745  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:04.462826  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:04.488102  285837 cri.go:89] found id: ""
	I1213 10:10:04.488127  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.488137  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:04.488143  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:04.488230  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:04.515038  285837 cri.go:89] found id: ""
	I1213 10:10:04.515078  285837 logs.go:282] 0 containers: []
	W1213 10:10:04.515087  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:04.515096  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:04.515134  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:04.540448  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:04.540483  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:04.570913  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:04.570942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:04.626396  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:04.626430  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:04.639908  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:04.639938  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:04.704410  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:04.696311    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.697396    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.698596    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.699329    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:04.700060    6983 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:07.204629  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:07.215001  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:07.215080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:07.239145  285837 cri.go:89] found id: ""
	I1213 10:10:07.239170  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.239180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:07.239186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:07.239243  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:07.263051  285837 cri.go:89] found id: ""
	I1213 10:10:07.263077  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.263086  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:07.263092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:07.263149  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:07.293024  285837 cri.go:89] found id: ""
	I1213 10:10:07.293051  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.293060  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:07.293066  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:07.293142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:07.320096  285837 cri.go:89] found id: ""
	I1213 10:10:07.320119  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.320128  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:07.320133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:07.320189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:07.349635  285837 cri.go:89] found id: ""
	I1213 10:10:07.349661  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.349670  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:07.349676  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:07.349733  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:07.374644  285837 cri.go:89] found id: ""
	I1213 10:10:07.374720  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.374744  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:07.374767  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:07.374875  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:07.399088  285837 cri.go:89] found id: ""
	I1213 10:10:07.399108  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.399117  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:07.399123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:07.399179  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:07.423187  285837 cri.go:89] found id: ""
	I1213 10:10:07.423210  285837 logs.go:282] 0 containers: []
	W1213 10:10:07.423219  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:07.423229  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:07.423244  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:07.478648  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:07.478682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:07.492218  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:07.492247  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:07.558077  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:07.550399    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.550906    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552488    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.552905    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:07.554328    7081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:07.558147  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:07.558168  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:07.583061  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:07.583093  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.116593  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:10.127456  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:10.127551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:10.157660  285837 cri.go:89] found id: ""
	I1213 10:10:10.157684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.157693  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:10.157699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:10.157758  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:10.183132  285837 cri.go:89] found id: ""
	I1213 10:10:10.183166  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.183175  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:10.183181  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:10.183248  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:10.209615  285837 cri.go:89] found id: ""
	I1213 10:10:10.209681  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.209704  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:10.209723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:10.209817  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:10.234760  285837 cri.go:89] found id: ""
	I1213 10:10:10.234789  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.234798  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:10.234804  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:10.234877  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:10.261577  285837 cri.go:89] found id: ""
	I1213 10:10:10.261608  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.261618  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:10.261624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:10.261682  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:10.289616  285837 cri.go:89] found id: ""
	I1213 10:10:10.289655  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.289664  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:10.289670  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:10.289742  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:10.316640  285837 cri.go:89] found id: ""
	I1213 10:10:10.316684  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.316693  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:10.316699  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:10.316768  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:10.346038  285837 cri.go:89] found id: ""
	I1213 10:10:10.346065  285837 logs.go:282] 0 containers: []
	W1213 10:10:10.346074  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:10.346084  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:10.346095  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:10.377589  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:10.377669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:10.435680  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:10.435714  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:10.449198  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:10.449226  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:10.521596  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:10.513247    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.514113    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.515874    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.516173    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:10.517662    7201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:10.521619  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:10.521632  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.047644  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:13.059744  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:13.059820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:13.087860  285837 cri.go:89] found id: ""
	I1213 10:10:13.087901  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.087911  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:13.087918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:13.087983  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:13.112735  285837 cri.go:89] found id: ""
	I1213 10:10:13.112802  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.112844  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:13.112876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:13.112953  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:13.141197  285837 cri.go:89] found id: ""
	I1213 10:10:13.141223  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.141244  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:13.141255  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:13.141315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:13.165043  285837 cri.go:89] found id: ""
	I1213 10:10:13.165119  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.165143  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:13.165155  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:13.165240  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:13.189664  285837 cri.go:89] found id: ""
	I1213 10:10:13.189746  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.189769  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:13.189782  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:13.189854  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:13.213620  285837 cri.go:89] found id: ""
	I1213 10:10:13.213686  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.213709  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:13.213723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:13.213799  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:13.241644  285837 cri.go:89] found id: ""
	I1213 10:10:13.241667  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.241676  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:13.241728  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:13.241812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:13.265927  285837 cri.go:89] found id: ""
	I1213 10:10:13.265997  285837 logs.go:282] 0 containers: []
	W1213 10:10:13.266030  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:13.266053  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:13.266079  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:13.293162  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:13.293239  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:13.326250  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:13.326334  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:13.386676  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:13.386710  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:13.400810  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:13.400838  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:13.469704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:13.461055    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.462337    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.463891    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.464211    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:13.465542    7317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
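	
	Each retry fails at the same point: kubectl inside the node cannot reach an apiserver on localhost:8443, which matches the empty crictl listings above (no kube-apiserver container ever starts). A quick way to confirm the endpoint state from the host, assuming curl is available in the node image (the profile name is a placeholder; substitute the profile under test):
	
	    minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz \
	      || echo 'apiserver not reachable on localhost:8443'
	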
	I1213 10:10:15.969962  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:15.980347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:15.980492  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:16.010088  285837 cri.go:89] found id: ""
	I1213 10:10:16.010118  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.010127  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:16.010133  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:16.010196  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:16.049187  285837 cri.go:89] found id: ""
	I1213 10:10:16.049209  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.049217  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:16.049223  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:16.049291  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:16.077965  285837 cri.go:89] found id: ""
	I1213 10:10:16.077987  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.077996  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:16.078002  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:16.078058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:16.108378  285837 cri.go:89] found id: ""
	I1213 10:10:16.108451  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.108474  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:16.108492  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:16.108577  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:16.134213  285837 cri.go:89] found id: ""
	I1213 10:10:16.134235  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.134244  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:16.134250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:16.134310  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:16.160222  285837 cri.go:89] found id: ""
	I1213 10:10:16.160255  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.160266  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:16.160273  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:16.160343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:16.188619  285837 cri.go:89] found id: ""
	I1213 10:10:16.188646  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.188655  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:16.188662  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:16.188725  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:16.213285  285837 cri.go:89] found id: ""
	I1213 10:10:16.213358  285837 logs.go:282] 0 containers: []
	W1213 10:10:16.213375  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:16.213387  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:16.213398  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:16.241893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:16.241922  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:16.298312  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:16.298349  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:16.312327  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:16.312403  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:16.384024  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:16.375836    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.376424    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.377950    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.378379    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:16.379873    7429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:16.384050  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:16.384064  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
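	The block above is one full probe cycle: minikube pgreps for a kube-apiserver process, then queries the CRI for each expected control-plane container by name, and every query returns an empty ID list. A rough bash rendering of that per-component sweep, with the component names and the crictl invocation taken verbatim from the log and only the loop structure added as a sketch:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      # An empty result corresponds to the 'found id: ""' lines above.
	      [ -z "$ids" ] && echo "no container matching \"$name\"" || echo "$name -> $ids"
	    done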
	I1213 10:10:18.909524  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:18.920391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:18.920459  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:18.945319  285837 cri.go:89] found id: ""
	I1213 10:10:18.945358  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.945367  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:18.945374  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:18.945431  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:18.968360  285837 cri.go:89] found id: ""
	I1213 10:10:18.968381  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.968390  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:18.968420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:18.968476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:18.992303  285837 cri.go:89] found id: ""
	I1213 10:10:18.992324  285837 logs.go:282] 0 containers: []
	W1213 10:10:18.992333  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:18.992339  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:18.992393  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:19.017601  285837 cri.go:89] found id: ""
	I1213 10:10:19.017677  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.017700  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:19.017718  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:19.017814  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:19.057563  285837 cri.go:89] found id: ""
	I1213 10:10:19.057636  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.057672  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:19.057695  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:19.057783  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:19.089906  285837 cri.go:89] found id: ""
	I1213 10:10:19.089929  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.089938  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:19.089944  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:19.090014  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:19.115237  285837 cri.go:89] found id: ""
	I1213 10:10:19.115258  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.115266  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:19.115272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:19.115351  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:19.140000  285837 cri.go:89] found id: ""
	I1213 10:10:19.140067  285837 logs.go:282] 0 containers: []
	W1213 10:10:19.140090  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:19.140112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:19.140150  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:19.201866  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:19.193342    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.193945    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195407    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.195879    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:19.197366    7524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:19.201888  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:19.201900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:19.227103  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:19.227135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:19.253635  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:19.253664  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:19.317211  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:19.317245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
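	When every container query comes back empty, the fallback each cycle is to gather five log sources. The commands below are exactly the ones quoted in the log lines above; only the grouping into one list is added:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a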
	I1213 10:10:21.835317  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:21.848786  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:21.848905  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:21.873912  285837 cri.go:89] found id: ""
	I1213 10:10:21.873938  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.873947  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:21.873966  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:21.874030  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:21.898927  285837 cri.go:89] found id: ""
	I1213 10:10:21.898948  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.898957  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:21.898963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:21.899017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:21.928040  285837 cri.go:89] found id: ""
	I1213 10:10:21.928067  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.928076  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:21.928083  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:21.928139  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:21.952762  285837 cri.go:89] found id: ""
	I1213 10:10:21.952784  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.952793  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:21.952800  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:21.952862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:21.977394  285837 cri.go:89] found id: ""
	I1213 10:10:21.977421  285837 logs.go:282] 0 containers: []
	W1213 10:10:21.977430  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:21.977437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:21.977502  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:22.001693  285837 cri.go:89] found id: ""
	I1213 10:10:22.001729  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.001739  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:22.001746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:22.001813  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:22.044074  285837 cri.go:89] found id: ""
	I1213 10:10:22.044111  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.044120  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:22.044126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:22.044203  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:22.083324  285837 cri.go:89] found id: ""
	I1213 10:10:22.083361  285837 logs.go:282] 0 containers: []
	W1213 10:10:22.083370  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:22.083380  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:22.083392  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:22.152550  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:22.144496    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.145029    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.146843    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.147160    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:22.148682    7639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:22.152574  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:22.152590  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:22.177867  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:22.177900  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:22.205266  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:22.205296  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:22.260906  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:22.260942  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:24.776001  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:24.787300  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:24.787370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:24.817822  285837 cri.go:89] found id: ""
	I1213 10:10:24.817967  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.817991  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:24.818032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:24.818131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:24.843042  285837 cri.go:89] found id: ""
	I1213 10:10:24.843079  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.843088  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:24.843094  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:24.843160  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:24.866977  285837 cri.go:89] found id: ""
	I1213 10:10:24.867012  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.867022  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:24.867029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:24.867100  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:24.892141  285837 cri.go:89] found id: ""
	I1213 10:10:24.892167  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.892177  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:24.892183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:24.892258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:24.922137  285837 cri.go:89] found id: ""
	I1213 10:10:24.922207  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.922230  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:24.922248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:24.922343  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:24.954689  285837 cri.go:89] found id: ""
	I1213 10:10:24.954720  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.954729  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:24.954736  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:24.954802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:24.979305  285837 cri.go:89] found id: ""
	I1213 10:10:24.979379  285837 logs.go:282] 0 containers: []
	W1213 10:10:24.979400  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:24.979420  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:24.979545  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:25.012112  285837 cri.go:89] found id: ""
	I1213 10:10:25.012139  285837 logs.go:282] 0 containers: []
	W1213 10:10:25.012149  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:25.012163  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:25.012177  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:25.083061  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:25.083100  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:25.100686  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:25.100713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:25.172319  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:25.162652    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.163028    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166188    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.166902    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:25.168493    7756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:25.172341  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:25.172354  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:25.198195  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:25.198230  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
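	The cri.go lines pin each query to containerd's runc root for the k8s.io namespace (/run/containerd/runc/k8s.io). A cross-check from containerd's own CLI, assuming ctr is installed alongside containerd as it usually is:

	    # kubelet's CRI containers live in the k8s.io namespace.
	    sudo ctr --namespace k8s.io containers list
	    # The runc state root the log refers to; empty here would be
	    # consistent with crictl finding nothing.
	    sudo ls /run/containerd/runc/k8s.io 2>/dev/null || echo "runc root absent or empty"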
	I1213 10:10:27.728458  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:27.739147  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:27.739212  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:27.768935  285837 cri.go:89] found id: ""
	I1213 10:10:27.768964  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.768973  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:27.768980  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:27.769069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:27.793269  285837 cri.go:89] found id: ""
	I1213 10:10:27.793294  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.793303  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:27.793309  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:27.793381  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:27.819458  285837 cri.go:89] found id: ""
	I1213 10:10:27.819481  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.819490  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:27.819496  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:27.819585  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:27.844796  285837 cri.go:89] found id: ""
	I1213 10:10:27.844819  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.844828  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:27.844834  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:27.844892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:27.873605  285837 cri.go:89] found id: ""
	I1213 10:10:27.873629  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.873638  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:27.873644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:27.873726  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:27.897452  285837 cri.go:89] found id: ""
	I1213 10:10:27.897476  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.897485  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:27.897491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:27.897548  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:27.923761  285837 cri.go:89] found id: ""
	I1213 10:10:27.923786  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.923796  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:27.923802  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:27.923880  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:27.952811  285837 cri.go:89] found id: ""
	I1213 10:10:27.952875  285837 logs.go:282] 0 containers: []
	W1213 10:10:27.952907  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:27.952949  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:27.952978  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:27.982369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:27.982444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:28.039695  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:28.039739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:28.059367  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:28.059394  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:28.141898  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:28.133880    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.134265    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.135970    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.136432    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:28.138063    7879 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:28.141920  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:28.141931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:30.668303  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:30.681191  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:30.681264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:30.708785  285837 cri.go:89] found id: ""
	I1213 10:10:30.708809  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.708817  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:30.708823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:30.708887  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:30.733895  285837 cri.go:89] found id: ""
	I1213 10:10:30.733918  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.733926  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:30.733932  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:30.733991  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:30.762790  285837 cri.go:89] found id: ""
	I1213 10:10:30.762811  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.762820  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:30.762826  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:30.762891  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:30.786743  285837 cri.go:89] found id: ""
	I1213 10:10:30.786807  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.786829  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:30.786846  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:30.786925  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:30.813249  285837 cri.go:89] found id: ""
	I1213 10:10:30.813272  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.813281  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:30.813288  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:30.813347  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:30.837491  285837 cri.go:89] found id: ""
	I1213 10:10:30.837520  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.837529  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:30.837536  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:30.837596  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:30.862539  285837 cri.go:89] found id: ""
	I1213 10:10:30.862599  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.862622  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:30.862640  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:30.862714  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:30.887350  285837 cri.go:89] found id: ""
	I1213 10:10:30.887371  285837 logs.go:282] 0 containers: []
	W1213 10:10:30.887379  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:30.887388  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:30.887399  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:30.943669  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:30.943701  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:30.957123  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:30.957172  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:31.036468  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:31.016437    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.017222    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.023761    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.024601    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:31.026097    7974 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:10:31.036496  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:31.036509  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:31.065951  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:31.065987  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.600787  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:33.611280  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:33.611352  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:33.640061  285837 cri.go:89] found id: ""
	I1213 10:10:33.640084  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.640093  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:33.640099  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:33.640159  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:33.664736  285837 cri.go:89] found id: ""
	I1213 10:10:33.664763  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.664772  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:33.664780  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:33.664839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:33.688858  285837 cri.go:89] found id: ""
	I1213 10:10:33.688882  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.688892  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:33.688898  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:33.688955  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:33.719915  285837 cri.go:89] found id: ""
	I1213 10:10:33.719944  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.719953  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:33.719960  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:33.720015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:33.744897  285837 cri.go:89] found id: ""
	I1213 10:10:33.744927  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.744937  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:33.744943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:33.745037  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:33.773037  285837 cri.go:89] found id: ""
	I1213 10:10:33.773059  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.773067  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:33.773073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:33.773134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:33.797407  285837 cri.go:89] found id: ""
	I1213 10:10:33.797433  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.797443  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:33.797449  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:33.797510  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:33.825833  285837 cri.go:89] found id: ""
	I1213 10:10:33.825859  285837 logs.go:282] 0 containers: []
	W1213 10:10:33.825868  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:33.825877  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:33.825889  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:33.851755  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:33.851788  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:33.884360  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:33.884385  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:33.940045  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:33.940080  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:33.954004  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:33.954039  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:34.035282  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:10:34.013082    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.013966    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.015748    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.016268    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:34.020357    8100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
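	The cycles repeat on a roughly three-second cadence (10:10:13, 10:10:16, 10:10:19, ...), i.e. a poll-until-deadline wait for the apiserver process. A sketch of that wait, reusing the pgrep pattern quoted in the log; the 300 s deadline is a placeholder, not minikube's actual timeout:

	    deadline=$((SECONDS + 300))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo "kube-apiserver never came up" >&2; exit 1; }
	      sleep 3
	    done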
	I1213 10:10:36.535645  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:36.547382  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:36.547469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:36.579677  285837 cri.go:89] found id: ""
	I1213 10:10:36.579701  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.579711  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:36.579725  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:36.579802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:36.606029  285837 cri.go:89] found id: ""
	I1213 10:10:36.606058  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.606067  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:36.606073  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:36.606134  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:36.631618  285837 cri.go:89] found id: ""
	I1213 10:10:36.631640  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.631649  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:36.631655  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:36.631712  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:36.656376  285837 cri.go:89] found id: ""
	I1213 10:10:36.656399  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.656407  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:36.656413  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:36.656469  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:36.684348  285837 cri.go:89] found id: ""
	I1213 10:10:36.684369  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.684377  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:36.684383  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:36.684443  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:36.708549  285837 cri.go:89] found id: ""
	I1213 10:10:36.708578  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.708587  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:36.708594  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:36.708653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:36.732630  285837 cri.go:89] found id: ""
	I1213 10:10:36.732659  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.732669  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:36.732677  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:36.732738  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:36.761465  285837 cri.go:89] found id: ""
	I1213 10:10:36.761493  285837 logs.go:282] 0 containers: []
	W1213 10:10:36.761503  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:36.761513  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:36.761524  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:36.774752  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:36.774787  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:36.837540  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:36.829412    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.830021    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.831594    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.832091    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:36.833636    8198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
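	The "connection refused" lines above are the symptom, not the cause: kubectl inside the node cannot reach an apiserver on [::1]:8443 because, per the crictl queries, no kube-apiserver container exists. A minimal sketch of the same probe run by hand from the host, using the in-node kubectl path shown in the log; the profile name is an assumption for this run, substitute the failing profile:

	    PROFILE=functional-074420   # assumed profile name; adjust to the profile under test
	    minikube -p "$PROFILE" ssh -- \
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # "connect: connection refused" here means nothing is listening on 8443
	    # inside the node, consistent with crictl finding no kube-apiserver container.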
	I1213 10:10:36.837603  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:36.837625  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:36.862806  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:36.862844  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:36.893277  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:36.893302  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.453851  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:39.464513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:39.464595  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:39.488288  285837 cri.go:89] found id: ""
	I1213 10:10:39.488310  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.488319  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:39.488329  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:39.488386  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:39.513054  285837 cri.go:89] found id: ""
	I1213 10:10:39.513077  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.513085  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:39.513091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:39.513156  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:39.542442  285837 cri.go:89] found id: ""
	I1213 10:10:39.542465  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.542474  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:39.542480  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:39.542535  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:39.575244  285837 cri.go:89] found id: ""
	I1213 10:10:39.575271  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.575280  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:39.575286  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:39.575341  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:39.605371  285837 cri.go:89] found id: ""
	I1213 10:10:39.605402  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.605411  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:39.605417  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:39.605475  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:39.629581  285837 cri.go:89] found id: ""
	I1213 10:10:39.629608  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.629617  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:39.629624  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:39.629680  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:39.657061  285837 cri.go:89] found id: ""
	I1213 10:10:39.657089  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.657098  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:39.657104  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:39.657162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:39.680815  285837 cri.go:89] found id: ""
	I1213 10:10:39.680880  285837 logs.go:282] 0 containers: []
	W1213 10:10:39.680894  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:39.680904  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:39.680915  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:39.738790  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:39.738822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:39.751947  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:39.751976  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:39.816341  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:39.808739    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.809296    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.810748    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.811148    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:39.812544    8310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:39.816364  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:39.816376  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:39.841100  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:39.841132  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
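	The per-component queries repeated throughout this section reduce to a small shell loop. A sketch using exactly the commands the harness runs inside the node, shown once so the pattern is easier to follow (the pgrep pattern is quoted here for safe interactive use; the log runs it unquoted):

	    # check for a running apiserver process, then ask the CRI about each component
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -n "$ids" ] || echo "no container found matching \"$name\""
	    done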
	I1213 10:10:42.369166  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:42.380009  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:42.380075  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:42.411353  285837 cri.go:89] found id: ""
	I1213 10:10:42.411380  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.411390  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:42.411397  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:42.411455  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:42.436688  285837 cri.go:89] found id: ""
	I1213 10:10:42.436718  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.436728  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:42.436734  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:42.436816  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:42.462185  285837 cri.go:89] found id: ""
	I1213 10:10:42.462211  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.462220  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:42.462226  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:42.462285  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:42.487623  285837 cri.go:89] found id: ""
	I1213 10:10:42.487647  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.487657  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:42.487663  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:42.487722  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:42.513508  285837 cri.go:89] found id: ""
	I1213 10:10:42.513534  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.513543  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:42.513549  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:42.513610  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:42.544400  285837 cri.go:89] found id: ""
	I1213 10:10:42.544424  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.544432  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:42.544439  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:42.544498  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:42.571251  285837 cri.go:89] found id: ""
	I1213 10:10:42.571281  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.571290  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:42.571297  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:42.571353  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:42.608069  285837 cri.go:89] found id: ""
	I1213 10:10:42.608094  285837 logs.go:282] 0 containers: []
	W1213 10:10:42.608103  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:42.608113  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:42.608124  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:42.663779  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:42.663815  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:42.677800  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:42.677839  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:42.742889  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:42.733865    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.734680    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736280    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.736823    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:42.738447    8423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:42.742913  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:42.742927  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:42.769648  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:42.769682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.299918  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:45.313054  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:45.313153  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:45.339870  285837 cri.go:89] found id: ""
	I1213 10:10:45.339904  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.339914  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:45.339935  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:45.340013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:45.364702  285837 cri.go:89] found id: ""
	I1213 10:10:45.364736  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.364746  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:45.364752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:45.364815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:45.389159  285837 cri.go:89] found id: ""
	I1213 10:10:45.389189  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.389200  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:45.389206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:45.389286  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:45.413889  285837 cri.go:89] found id: ""
	I1213 10:10:45.413918  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.413927  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:45.413933  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:45.414000  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:45.438849  285837 cri.go:89] found id: ""
	I1213 10:10:45.438885  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.438895  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:45.438901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:45.438962  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:45.469093  285837 cri.go:89] found id: ""
	I1213 10:10:45.469116  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.469124  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:45.469130  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:45.469233  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:45.493365  285837 cri.go:89] found id: ""
	I1213 10:10:45.493391  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.493401  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:45.493408  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:45.493465  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:45.517810  285837 cri.go:89] found id: ""
	I1213 10:10:45.517839  285837 logs.go:282] 0 containers: []
	W1213 10:10:45.517848  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:45.517858  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:45.517870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:45.532750  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:45.532781  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:45.610253  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:45.601970    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.602367    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604011    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.604678    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:45.606346    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:45.610276  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:45.610289  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:45.635170  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:45.635201  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:45.662649  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:45.662727  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
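	Each "Gathering logs for ..." step corresponds to one shell command. Collected here as a single sketch (run inside the node), verbatim from the invocations above:

	    sudo journalctl -u kubelet -n 400        # kubelet logs
	    sudo journalctl -u containerd -n 400     # container runtime logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status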
	I1213 10:10:48.218853  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:48.230454  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:48.230539  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:48.256210  285837 cri.go:89] found id: ""
	I1213 10:10:48.256235  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.256244  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:48.256250  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:48.256311  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:48.288857  285837 cri.go:89] found id: ""
	I1213 10:10:48.288882  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.288891  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:48.288897  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:48.288952  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:48.317960  285837 cri.go:89] found id: ""
	I1213 10:10:48.317994  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.318020  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:48.318034  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:48.318108  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:48.347646  285837 cri.go:89] found id: ""
	I1213 10:10:48.347724  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.347738  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:48.347746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:48.347815  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:48.372818  285837 cri.go:89] found id: ""
	I1213 10:10:48.372840  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.372849  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:48.372855  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:48.372915  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:48.400208  285837 cri.go:89] found id: ""
	I1213 10:10:48.400281  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.400296  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:48.400304  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:48.400373  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:48.424245  285837 cri.go:89] found id: ""
	I1213 10:10:48.424272  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.424282  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:48.424287  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:48.424345  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:48.450041  285837 cri.go:89] found id: ""
	I1213 10:10:48.450074  285837 logs.go:282] 0 containers: []
	W1213 10:10:48.450083  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:48.450092  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:48.450103  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:48.516704  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:48.507097    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.507702    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509433    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.509994    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:48.511703    8639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:48.516726  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:48.516739  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:48.544227  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:48.544262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:48.581036  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:48.581067  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:48.643405  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:48.643440  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.157408  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:51.168232  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:51.168298  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:51.194497  285837 cri.go:89] found id: ""
	I1213 10:10:51.194533  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.194545  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:51.194552  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:51.194619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:51.219079  285837 cri.go:89] found id: ""
	I1213 10:10:51.219099  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.219107  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:51.219112  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:51.219167  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:51.244709  285837 cri.go:89] found id: ""
	I1213 10:10:51.244732  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.244740  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:51.244747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:51.244806  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:51.284617  285837 cri.go:89] found id: ""
	I1213 10:10:51.284643  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.284651  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:51.284657  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:51.284713  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:51.314124  285837 cri.go:89] found id: ""
	I1213 10:10:51.314152  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.314162  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:51.314170  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:51.314228  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:51.346119  285837 cri.go:89] found id: ""
	I1213 10:10:51.346144  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.346153  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:51.346160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:51.346218  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:51.371813  285837 cri.go:89] found id: ""
	I1213 10:10:51.371841  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.371850  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:51.371861  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:51.371918  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:51.397126  285837 cri.go:89] found id: ""
	I1213 10:10:51.397150  285837 logs.go:282] 0 containers: []
	W1213 10:10:51.397159  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:51.397174  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:51.397216  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:51.426866  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:51.426894  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:51.483164  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:51.483196  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:51.497003  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:51.497028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:51.582114  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:51.573287    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.574073    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.575716    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.576298    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:51.577879    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:51.582138  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:51.582151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.110647  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:54.121581  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:54.121653  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:54.145568  285837 cri.go:89] found id: ""
	I1213 10:10:54.145591  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.145600  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:54.145606  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:54.145667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:54.171162  285837 cri.go:89] found id: ""
	I1213 10:10:54.171186  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.171195  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:54.171202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:54.171258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:54.196117  285837 cri.go:89] found id: ""
	I1213 10:10:54.196140  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.196148  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:54.196154  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:54.196211  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:54.221183  285837 cri.go:89] found id: ""
	I1213 10:10:54.221226  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.221236  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:54.221243  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:54.221300  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:54.246527  285837 cri.go:89] found id: ""
	I1213 10:10:54.246569  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.246578  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:54.246585  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:54.246648  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:54.273839  285837 cri.go:89] found id: ""
	I1213 10:10:54.273866  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.273875  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:54.273881  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:54.273936  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:54.305443  285837 cri.go:89] found id: ""
	I1213 10:10:54.305468  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.305477  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:54.305483  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:54.305566  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:54.337568  285837 cri.go:89] found id: ""
	I1213 10:10:54.337634  285837 logs.go:282] 0 containers: []
	W1213 10:10:54.337649  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:54.337659  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:54.337671  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:54.394420  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:54.394456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:54.408137  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:54.408167  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:54.476257  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:54.467629    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.468321    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470154    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.470801    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:54.472389    8865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:10:54.476279  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:54.476294  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:54.501779  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:54.501818  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.039708  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:57.051575  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:57.051656  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:10:57.077147  285837 cri.go:89] found id: ""
	I1213 10:10:57.077171  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.077180  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:10:57.077186  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:10:57.077249  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:10:57.100638  285837 cri.go:89] found id: ""
	I1213 10:10:57.100662  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.100672  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:10:57.100679  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:10:57.100736  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:10:57.124849  285837 cri.go:89] found id: ""
	I1213 10:10:57.124872  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.124880  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:10:57.124886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:10:57.124942  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:10:57.149947  285837 cri.go:89] found id: ""
	I1213 10:10:57.149970  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.149979  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:10:57.149985  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:10:57.150041  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:10:57.177921  285837 cri.go:89] found id: ""
	I1213 10:10:57.177944  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.177952  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:10:57.177958  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:10:57.178015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:10:57.202761  285837 cri.go:89] found id: ""
	I1213 10:10:57.202785  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.202793  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:10:57.202799  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:10:57.202861  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:10:57.232853  285837 cri.go:89] found id: ""
	I1213 10:10:57.232880  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.232890  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:10:57.232896  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:10:57.232958  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:10:57.257698  285837 cri.go:89] found id: ""
	I1213 10:10:57.257725  285837 logs.go:282] 0 containers: []
	W1213 10:10:57.257734  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:10:57.257744  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:10:57.257754  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:10:57.284012  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:10:57.284084  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:10:57.318707  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:10:57.318744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:10:57.380534  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:10:57.380571  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:10:57.394671  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:10:57.394704  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:10:57.463198  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:10:57.453865    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.454679    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.456385    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.457080    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:10:57.458704    8992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
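	Note the cadence: the probe re-runs roughly every three seconds (10:10:36, :39, :42, ... 10:11:00). A hypothetical reduction of that wait loop, for readers reproducing the hang by hand; the three-second interval is inferred from the timestamps, not from the harness source:

	    # keep polling until an apiserver process appears or the outer test timeout fires
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done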
	I1213 10:10:59.963429  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:10:59.974005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:10:59.974074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:00.002819  285837 cri.go:89] found id: ""
	I1213 10:11:00.002842  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.002853  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:00.002860  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:00.002927  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:00.094025  285837 cri.go:89] found id: ""
	I1213 10:11:00.094053  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.094064  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:00.094071  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:00.094142  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:00.174313  285837 cri.go:89] found id: ""
	I1213 10:11:00.174336  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.174345  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:00.174352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:00.174417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:00.249900  285837 cri.go:89] found id: ""
	I1213 10:11:00.249939  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.249949  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:00.249968  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:00.250053  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:00.326093  285837 cri.go:89] found id: ""
	I1213 10:11:00.326121  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.326130  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:00.326138  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:00.326207  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:00.398659  285837 cri.go:89] found id: ""
	I1213 10:11:00.398685  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.398695  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:00.398702  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:00.398771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:00.438080  285837 cri.go:89] found id: ""
	I1213 10:11:00.438106  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.438116  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:00.438123  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:00.438200  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:00.466610  285837 cri.go:89] found id: ""
	I1213 10:11:00.466635  285837 logs.go:282] 0 containers: []
	W1213 10:11:00.466644  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
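Each cri.go:54/cri.go:89 pair above is one `crictl ps -a --quiet --name=<component>` invocation, and an empty result is what produces the `found id: ""` / `0 containers` lines. A short Go sketch of that listing step (an illustration assuming crictl is installed locally; minikube itself runs the command over SSH, and `listContainers` is a made-up helper name, not minikube's cri.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs printed by `crictl ps`, one per line.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		// An empty slice here reproduces the "0 containers: []" log lines.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }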
	I1213 10:11:00.466655  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:00.466668  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:00.524796  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:00.524832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:00.541430  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:00.541461  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:00.620210  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:00.611464    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.612064    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.613626    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.614181    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:00.615780    9088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:00.620234  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:00.620248  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:00.646443  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:00.646481  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
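When no control-plane containers are found, the logs.go:123 entries fall back to gathering a fixed set of sources. A sketch of that fan-out, with the command strings copied verbatim from the log (assumes it runs directly on the node, whereas minikube executes each command through its SSH runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// One entry per "Gathering logs for ..." line above.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			// A failing source is reported but does not abort the gather,
    			// mirroring the W-level "failed describe nodes" entries.
    			fmt.Printf("== %s failed: %v ==\n", name, err)
    			continue
    		}
    		fmt.Printf("== %s: %d bytes ==\n", name, len(out))
    	}
    }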
	I1213 10:11:03.175597  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:03.187100  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:03.187169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:03.213075  285837 cri.go:89] found id: ""
	I1213 10:11:03.213099  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.213108  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:03.213114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:03.213173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:03.238387  285837 cri.go:89] found id: ""
	I1213 10:11:03.238413  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.238422  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:03.238428  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:03.238485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:03.263021  285837 cri.go:89] found id: ""
	I1213 10:11:03.263047  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.263057  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:03.263064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:03.263120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:03.287967  285837 cri.go:89] found id: ""
	I1213 10:11:03.287990  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.287999  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:03.288005  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:03.288070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:03.313649  285837 cri.go:89] found id: ""
	I1213 10:11:03.313676  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.313685  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:03.313691  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:03.313782  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:03.341329  285837 cri.go:89] found id: ""
	I1213 10:11:03.341395  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.341410  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:03.341418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:03.341480  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:03.367350  285837 cri.go:89] found id: ""
	I1213 10:11:03.367376  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.367386  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:03.367392  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:03.367450  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:03.394523  285837 cri.go:89] found id: ""
	I1213 10:11:03.394548  285837 logs.go:282] 0 containers: []
	W1213 10:11:03.394556  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:03.394566  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:03.394579  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:03.408418  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:03.408444  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:03.481932  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:03.473186    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.474065    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.475730    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.476279    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:03.477971    9198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:03.481953  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:03.481965  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:03.508165  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:03.508197  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:03.564104  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:03.564135  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.137748  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:06.148529  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:06.148601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:06.173118  285837 cri.go:89] found id: ""
	I1213 10:11:06.173142  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.173151  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:06.173164  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:06.173225  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:06.198710  285837 cri.go:89] found id: ""
	I1213 10:11:06.198732  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.198741  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:06.198747  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:06.198802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:06.224139  285837 cri.go:89] found id: ""
	I1213 10:11:06.224163  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.224171  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:06.224183  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:06.224246  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:06.249528  285837 cri.go:89] found id: ""
	I1213 10:11:06.249553  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.249568  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:06.249577  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:06.249636  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:06.283856  285837 cri.go:89] found id: ""
	I1213 10:11:06.283886  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.283894  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:06.283901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:06.283964  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:06.307922  285837 cri.go:89] found id: ""
	I1213 10:11:06.307947  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.307956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:06.307963  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:06.308020  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:06.332705  285837 cri.go:89] found id: ""
	I1213 10:11:06.332731  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.332739  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:06.332746  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:06.332805  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:06.358646  285837 cri.go:89] found id: ""
	I1213 10:11:06.358672  285837 logs.go:282] 0 containers: []
	W1213 10:11:06.358681  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:06.358691  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:06.358702  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:06.414726  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:06.414763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:06.428830  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:06.428866  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:06.495345  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:06.486296    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.486912    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.488721    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.489058    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:06.490614    9314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:06.495373  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:06.495386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:06.523314  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:06.523359  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:09.076696  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:09.087477  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:09.087569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:09.111658  285837 cri.go:89] found id: ""
	I1213 10:11:09.111681  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.111690  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:09.111696  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:09.111759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:09.135775  285837 cri.go:89] found id: ""
	I1213 10:11:09.135801  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.135809  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:09.135816  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:09.135872  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:09.165477  285837 cri.go:89] found id: ""
	I1213 10:11:09.165500  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.165514  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:09.165520  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:09.165576  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:09.194399  285837 cri.go:89] found id: ""
	I1213 10:11:09.194421  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.194437  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:09.194446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:09.194503  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:09.223486  285837 cri.go:89] found id: ""
	I1213 10:11:09.223508  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.223537  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:09.223544  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:09.223603  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:09.252819  285837 cri.go:89] found id: ""
	I1213 10:11:09.252842  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.252851  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:09.252857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:09.252916  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:09.277570  285837 cri.go:89] found id: ""
	I1213 10:11:09.277641  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.277656  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:09.277666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:09.277729  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:09.302629  285837 cri.go:89] found id: ""
	I1213 10:11:09.302652  285837 logs.go:282] 0 containers: []
	W1213 10:11:09.302661  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:09.302671  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:09.302682  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:09.358773  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:09.358811  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:09.372815  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:09.372842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:09.441717  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:09.432460    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.433025    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.434726    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.435463    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:09.437132    9425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:09.441793  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:09.441822  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:09.466485  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:09.466517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:11.993817  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:12.018615  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:12.018690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:12.044911  285837 cri.go:89] found id: ""
	I1213 10:11:12.044934  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.044943  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:12.044949  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:12.045013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:12.069918  285837 cri.go:89] found id: ""
	I1213 10:11:12.069940  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.069949  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:12.069955  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:12.070018  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:12.094440  285837 cri.go:89] found id: ""
	I1213 10:11:12.094461  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.094470  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:12.094476  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:12.094530  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:12.118079  285837 cri.go:89] found id: ""
	I1213 10:11:12.118099  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.118108  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:12.118114  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:12.118169  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:12.145090  285837 cri.go:89] found id: ""
	I1213 10:11:12.145115  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.145125  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:12.145131  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:12.145186  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:12.168654  285837 cri.go:89] found id: ""
	I1213 10:11:12.168725  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.168749  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:12.168762  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:12.168820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:12.192603  285837 cri.go:89] found id: ""
	I1213 10:11:12.192677  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.192704  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:12.192726  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:12.192802  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:12.216389  285837 cri.go:89] found id: ""
	I1213 10:11:12.216454  285837 logs.go:282] 0 containers: []
	W1213 10:11:12.216478  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:12.216501  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:12.216517  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:12.273281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:12.273315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:12.286866  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:12.286903  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:12.353852  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:12.345393    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.345982    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.347576    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.348061    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:12.349757    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:12.353884  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:12.353914  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:12.379896  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:12.379931  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:14.910354  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:14.920854  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:14.920922  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:14.946408  285837 cri.go:89] found id: ""
	I1213 10:11:14.946430  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.946439  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:14.946446  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:14.946501  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:14.977293  285837 cri.go:89] found id: ""
	I1213 10:11:14.977322  285837 logs.go:282] 0 containers: []
	W1213 10:11:14.977337  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:14.977343  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:14.977414  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:15.010967  285837 cri.go:89] found id: ""
	I1213 10:11:15.011055  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.011079  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:15.011098  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:15.011201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:15.050270  285837 cri.go:89] found id: ""
	I1213 10:11:15.050294  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.050314  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:15.050321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:15.050387  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:15.076902  285837 cri.go:89] found id: ""
	I1213 10:11:15.076927  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.076936  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:15.076943  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:15.077003  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:15.106349  285837 cri.go:89] found id: ""
	I1213 10:11:15.106379  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.106389  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:15.106395  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:15.106458  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:15.134472  285837 cri.go:89] found id: ""
	I1213 10:11:15.134497  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.134506  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:15.134512  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:15.134569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:15.161713  285837 cri.go:89] found id: ""
	I1213 10:11:15.161740  285837 logs.go:282] 0 containers: []
	W1213 10:11:15.161750  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:15.161759  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:15.161773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:15.217480  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:15.217512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:15.231189  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:15.231217  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:15.304481  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:15.289570    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.290198    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298086    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.298782    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:15.300522    9656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1213 10:11:15.304502  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:15.304515  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:15.329819  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:15.329853  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:17.857044  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:17.868755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:17.868830  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:17.892866  285837 cri.go:89] found id: ""
	I1213 10:11:17.892890  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.892900  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:17.892906  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:17.892969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:17.918428  285837 cri.go:89] found id: ""
	I1213 10:11:17.918450  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.918459  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:17.918467  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:17.918520  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:17.941924  285837 cri.go:89] found id: ""
	I1213 10:11:17.941945  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.941953  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:17.941959  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:17.942015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:17.966130  285837 cri.go:89] found id: ""
	I1213 10:11:17.966153  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.966162  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:17.966168  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:17.966266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:17.994412  285837 cri.go:89] found id: ""
	I1213 10:11:17.994437  285837 logs.go:282] 0 containers: []
	W1213 10:11:17.994446  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:17.994452  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:17.994509  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:18.020369  285837 cri.go:89] found id: ""
	I1213 10:11:18.020392  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.020401  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:18.020407  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:18.020485  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:18.047590  285837 cri.go:89] found id: ""
	I1213 10:11:18.047614  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.047623  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:18.047629  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:18.047689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:18.074433  285837 cri.go:89] found id: ""
	I1213 10:11:18.074456  285837 logs.go:282] 0 containers: []
	W1213 10:11:18.074465  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:18.074475  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:18.074487  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:18.101094  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:18.101129  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:18.129666  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:18.129695  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:18.185620  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:18.185652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:18.199477  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:18.199503  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:18.264408  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:18.255885    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.256623    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258349    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.258851    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:18.260403    9783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
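The pgrep probe repeats on a roughly three-second cadence (10:11:00, 10:11:03, 10:11:06, ...) until kube-apiserver appears or the surrounding wait gives up. A sketch of such a poll loop (the 3s interval is read off the timestamps above; the timeout value and function name are assumptions, not minikube's actual wait code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the same pgrep pattern seen in the log until it
    // matches or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// Exit status 0 means pgrep matched a running process.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }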
	I1213 10:11:20.765401  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:20.778692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:20.778759  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:20.856776  285837 cri.go:89] found id: ""
	I1213 10:11:20.856798  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.856807  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:20.856813  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:20.856871  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:20.886867  285837 cri.go:89] found id: ""
	I1213 10:11:20.886896  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.886912  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:20.886918  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:20.886992  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:20.915220  285837 cri.go:89] found id: ""
	I1213 10:11:20.915245  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.915254  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:20.915260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:20.915318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:20.939562  285837 cri.go:89] found id: ""
	I1213 10:11:20.939585  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.939594  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:20.939600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:20.939667  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:20.964172  285837 cri.go:89] found id: ""
	I1213 10:11:20.964195  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.964204  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:20.964210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:20.964269  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:20.989184  285837 cri.go:89] found id: ""
	I1213 10:11:20.989206  285837 logs.go:282] 0 containers: []
	W1213 10:11:20.989215  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:20.989221  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:20.989287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:21.015584  285837 cri.go:89] found id: ""
	I1213 10:11:21.015608  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.015616  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:21.015623  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:21.015692  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:21.041789  285837 cri.go:89] found id: ""
	I1213 10:11:21.041812  285837 logs.go:282] 0 containers: []
	W1213 10:11:21.041820  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:21.041829  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:21.041842  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:21.055424  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:21.055450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:21.119438  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:21.111338    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.112051    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.113609    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.114084    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:21.115730    9886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:21.119456  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:21.119469  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:21.144678  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:21.144713  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:21.177284  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:21.177313  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
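
Each diagnostic pass in this stretch of the log has the same shape: cri.go queries crictl for every expected control-plane container by name, finds none, and logs.go then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before the next retry. A minimal standalone sketch of that container lookup, assuming crictl is on the local PATH instead of being reached through minikube's SSH runner (an illustration of the command pattern, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the `sudo crictl ps -a --quiet --name=<name>`
// calls in the log: crictl prints the IDs of all matching containers
// (any state), one per line, and prints nothing when there is no match.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("lookup %q failed: %v\n", name, err)
			continue
		}
		// An empty slice here corresponds to the found id: "" /
		// "0 containers" pairs in the log above.
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
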
	I1213 10:11:23.742410  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:23.752527  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:23.752601  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:23.812957  285837 cri.go:89] found id: ""
	I1213 10:11:23.812979  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.812987  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:23.812994  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:23.813052  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:23.858208  285837 cri.go:89] found id: ""
	I1213 10:11:23.858236  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.858246  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:23.858253  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:23.858315  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:23.885293  285837 cri.go:89] found id: ""
	I1213 10:11:23.885318  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.885328  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:23.885334  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:23.885396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:23.911374  285837 cri.go:89] found id: ""
	I1213 10:11:23.911399  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.911409  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:23.911541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:23.911621  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:23.940586  285837 cri.go:89] found id: ""
	I1213 10:11:23.940611  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.940620  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:23.940625  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:23.940683  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:23.965387  285837 cri.go:89] found id: ""
	I1213 10:11:23.965413  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.965423  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:23.965430  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:23.965491  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:23.989910  285837 cri.go:89] found id: ""
	I1213 10:11:23.989936  285837 logs.go:282] 0 containers: []
	W1213 10:11:23.989945  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:23.989952  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:23.990009  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:24.016511  285837 cri.go:89] found id: ""
	I1213 10:11:24.016539  285837 logs.go:282] 0 containers: []
	W1213 10:11:24.016548  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:24.016558  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:24.016569  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:24.076500  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:24.076542  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:24.090891  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:24.090920  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:24.158444  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:24.150405    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.151121    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.152642    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.153107    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:24.154567    9999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:24.158466  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:24.158478  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:24.184352  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:24.184389  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
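
The recurring connection-refused stderr above is the signature of nothing listening on the apiserver port: kubectl dials https://localhost:8443 and the TCP dial itself fails before TLS or authentication is attempted, which is consistent with the empty kube-apiserver container listings. A quick probe for that condition, using the address taken from the errors (an illustrative check, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's errors show it dialing [::1]:8443; probe the same endpoint.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// On this node the dial fails with "connect: connection refused",
		// matching the memcache.go errors in the log.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
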
	I1213 10:11:26.715866  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:26.726291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:26.726358  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:26.749726  285837 cri.go:89] found id: ""
	I1213 10:11:26.749748  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.749757  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:26.749763  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:26.749820  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:26.798311  285837 cri.go:89] found id: ""
	I1213 10:11:26.798333  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.798341  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:26.798347  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:26.798403  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:26.855482  285837 cri.go:89] found id: ""
	I1213 10:11:26.855506  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.855541  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:26.855548  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:26.855606  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:26.887763  285837 cri.go:89] found id: ""
	I1213 10:11:26.887833  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.887857  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:26.887876  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:26.887963  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:26.913160  285837 cri.go:89] found id: ""
	I1213 10:11:26.913183  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.913192  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:26.913199  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:26.913266  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:26.940887  285837 cri.go:89] found id: ""
	I1213 10:11:26.940965  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.940996  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:26.941004  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:26.941070  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:26.965212  285837 cri.go:89] found id: ""
	I1213 10:11:26.965233  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.965242  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:26.965248  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:26.965313  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:26.989687  285837 cri.go:89] found id: ""
	I1213 10:11:26.989710  285837 logs.go:282] 0 containers: []
	W1213 10:11:26.989718  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:26.989733  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:26.989744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:27.020130  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:27.020156  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:27.075963  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:27.076001  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:27.089421  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:27.089452  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:27.154208  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:27.146008   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.146794   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148444   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.148744   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:27.150216   10127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:27.154231  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:27.154243  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:29.679077  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:29.689987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:29.690113  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:29.719241  285837 cri.go:89] found id: ""
	I1213 10:11:29.719304  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.719318  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:29.719325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:29.719382  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:29.745413  285837 cri.go:89] found id: ""
	I1213 10:11:29.745511  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.745533  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:29.745541  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:29.745624  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:29.789119  285837 cri.go:89] found id: ""
	I1213 10:11:29.789193  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.789228  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:29.789251  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:29.789362  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:29.865327  285837 cri.go:89] found id: ""
	I1213 10:11:29.865413  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.865429  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:29.865437  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:29.865495  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:29.890183  285837 cri.go:89] found id: ""
	I1213 10:11:29.890260  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.890283  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:29.890301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:29.890397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:29.919549  285837 cri.go:89] found id: ""
	I1213 10:11:29.919622  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.919646  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:29.919666  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:29.919771  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:29.945219  285837 cri.go:89] found id: ""
	I1213 10:11:29.945248  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.945257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:29.945264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:29.945364  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:29.973791  285837 cri.go:89] found id: ""
	I1213 10:11:29.973822  285837 logs.go:282] 0 containers: []
	W1213 10:11:29.973832  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:29.973842  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:29.973870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:30.030470  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:30.030512  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:30.047458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:30.047559  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:30.123116  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:30.112772   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.113344   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.115682   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.116490   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:30.118243   10227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:30.123215  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:30.123250  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:30.149652  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:30.149689  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:32.679599  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:32.690298  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:32.690372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:32.713694  285837 cri.go:89] found id: ""
	I1213 10:11:32.713718  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.713726  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:32.713733  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:32.713790  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:32.738621  285837 cri.go:89] found id: ""
	I1213 10:11:32.738645  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.738654  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:32.738660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:32.738720  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:32.762830  285837 cri.go:89] found id: ""
	I1213 10:11:32.762855  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.762865  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:32.762871  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:32.762928  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:32.799422  285837 cri.go:89] found id: ""
	I1213 10:11:32.799448  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.799464  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:32.799471  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:32.799543  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:32.856726  285837 cri.go:89] found id: ""
	I1213 10:11:32.856759  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.856768  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:32.856775  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:32.856839  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:32.883319  285837 cri.go:89] found id: ""
	I1213 10:11:32.883346  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.883356  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:32.883362  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:32.883422  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:32.909028  285837 cri.go:89] found id: ""
	I1213 10:11:32.909054  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.909063  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:32.909070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:32.909127  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:32.938657  285837 cri.go:89] found id: ""
	I1213 10:11:32.938691  285837 logs.go:282] 0 containers: []
	W1213 10:11:32.938701  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:32.938710  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:32.938721  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:32.994400  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:32.994434  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:33.008614  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:33.008653  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:33.076509  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:33.069025   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.069462   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.070984   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.071296   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:33.072695   10343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:33.076539  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:33.076553  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:33.101599  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:33.101631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
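
The container status step runs a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so the listing comes from crictl when it is available and from docker otherwise. The same try-first-then-fall-back shape in Go, as a sketch under the assumption that at least one of the two CLIs is installed:

package main

import (
	"fmt"
	"os/exec"
)

// psAll tries crictl first and falls back to docker, mirroring the
// `... || sudo docker ps -a` chain in the gathered command line.
func psAll() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := psAll()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Print(string(out))
}
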
	I1213 10:11:35.629072  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:35.639660  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:35.639731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:35.664032  285837 cri.go:89] found id: ""
	I1213 10:11:35.664060  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.664068  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:35.664076  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:35.664130  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:35.692081  285837 cri.go:89] found id: ""
	I1213 10:11:35.692108  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.692118  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:35.692124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:35.692180  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:35.717152  285837 cri.go:89] found id: ""
	I1213 10:11:35.717177  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.717186  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:35.717192  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:35.717251  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:35.741898  285837 cri.go:89] found id: ""
	I1213 10:11:35.741931  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.741940  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:35.741946  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:35.742013  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:35.766255  285837 cri.go:89] found id: ""
	I1213 10:11:35.766289  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.766298  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:35.766305  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:35.766370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:35.829052  285837 cri.go:89] found id: ""
	I1213 10:11:35.829093  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.829104  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:35.829111  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:35.829189  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:35.872000  285837 cri.go:89] found id: ""
	I1213 10:11:35.872072  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.872085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:35.872092  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:35.872162  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:35.897842  285837 cri.go:89] found id: ""
	I1213 10:11:35.897874  285837 logs.go:282] 0 containers: []
	W1213 10:11:35.897883  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:35.897893  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:35.897911  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:35.955605  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:35.955640  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:35.969234  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:35.969262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:36.035000  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:36.026108   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.026822   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.028473   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.029109   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:36.030769   10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:36.035063  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:36.035083  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:36.061000  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:36.061037  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:38.589308  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:38.599753  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:38.599818  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:38.623379  285837 cri.go:89] found id: ""
	I1213 10:11:38.623400  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.623409  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:38.623418  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:38.623476  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:38.649806  285837 cri.go:89] found id: ""
	I1213 10:11:38.649830  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.649840  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:38.649847  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:38.649908  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:38.674234  285837 cri.go:89] found id: ""
	I1213 10:11:38.674257  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.674266  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:38.674272  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:38.674334  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:38.698759  285837 cri.go:89] found id: ""
	I1213 10:11:38.698780  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.698789  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:38.698795  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:38.698851  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:38.725178  285837 cri.go:89] found id: ""
	I1213 10:11:38.725205  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.725215  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:38.725222  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:38.725281  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:38.766167  285837 cri.go:89] found id: ""
	I1213 10:11:38.766194  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.766204  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:38.766210  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:38.766265  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:38.808982  285837 cri.go:89] found id: ""
	I1213 10:11:38.809009  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.809017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:38.809023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:38.809080  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:38.870538  285837 cri.go:89] found id: ""
	I1213 10:11:38.870560  285837 logs.go:282] 0 containers: []
	W1213 10:11:38.870568  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:38.870578  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:38.870589  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:38.928916  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:38.928958  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:38.943274  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:38.943304  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:39.011182  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:38.999196   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:38.999883   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.001952   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.004117   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:39.005416   10567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:39.011208  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:39.011223  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:39.038343  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:39.038377  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.571555  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:41.582245  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:41.582319  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:41.609447  285837 cri.go:89] found id: ""
	I1213 10:11:41.609473  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.609483  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:41.609490  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:41.609546  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:41.637801  285837 cri.go:89] found id: ""
	I1213 10:11:41.637823  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.637832  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:41.637838  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:41.637901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:41.661762  285837 cri.go:89] found id: ""
	I1213 10:11:41.661786  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.661795  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:41.661801  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:41.661865  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:41.685944  285837 cri.go:89] found id: ""
	I1213 10:11:41.685966  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.685981  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:41.685987  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:41.686044  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:41.710847  285837 cri.go:89] found id: ""
	I1213 10:11:41.710874  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.710883  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:41.710889  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:41.710947  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:41.739921  285837 cri.go:89] found id: ""
	I1213 10:11:41.739947  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.739956  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:41.739962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:41.740021  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:41.764216  285837 cri.go:89] found id: ""
	I1213 10:11:41.764245  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.764254  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:41.764260  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:41.764318  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:41.822929  285837 cri.go:89] found id: ""
	I1213 10:11:41.822960  285837 logs.go:282] 0 containers: []
	W1213 10:11:41.822969  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:41.822995  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:41.823012  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:41.860056  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:41.860087  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:41.916192  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:41.916225  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:41.932977  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:41.933051  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:41.996358  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:41.988135   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.988678   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990296   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.990680   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:41.992332   10689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:41.996420  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:41.996436  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
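
Between passes the runner re-checks for an apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.*; the timestamps show this happening roughly every three seconds. A sketch of that wait loop (the 60-second timeout is illustrative, not the value minikube uses):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process for this
// profile shows up or the deadline passes. pgrep exits non-zero when
// nothing matches, which exec surfaces as an error.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true
		}
		time.Sleep(3 * time.Second) // roughly the cadence seen in the log
	}
	return false
}

func main() {
	if waitForAPIServer(60 * time.Second) {
		fmt.Println("kube-apiserver process found")
	} else {
		fmt.Println("timed out waiting for kube-apiserver")
	}
}
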
	I1213 10:11:44.525380  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:44.536068  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:44.536183  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:44.561444  285837 cri.go:89] found id: ""
	I1213 10:11:44.561476  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.561485  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:44.561491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:44.561552  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:44.586945  285837 cri.go:89] found id: ""
	I1213 10:11:44.586975  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.586985  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:44.586991  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:44.587057  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:44.612842  285837 cri.go:89] found id: ""
	I1213 10:11:44.612874  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.612885  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:44.612891  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:44.612949  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:44.638444  285837 cri.go:89] found id: ""
	I1213 10:11:44.638472  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.638482  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:44.638489  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:44.638547  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:44.664168  285837 cri.go:89] found id: ""
	I1213 10:11:44.664191  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.664200  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:44.664206  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:44.664264  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:44.693563  285837 cri.go:89] found id: ""
	I1213 10:11:44.693634  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.693659  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:44.693675  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:44.693748  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:44.719349  285837 cri.go:89] found id: ""
	I1213 10:11:44.719376  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.719385  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:44.719391  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:44.719456  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:44.744438  285837 cri.go:89] found id: ""
	I1213 10:11:44.744467  285837 logs.go:282] 0 containers: []
	W1213 10:11:44.744476  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:44.744485  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:44.744498  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:44.815232  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:44.815321  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:44.836304  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:44.836331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:44.928422  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:44.919472   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.920276   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.921982   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.922615   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:44.924346   10786 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:44.928443  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:44.928456  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:44.954308  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:44.954348  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
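	The block above is one full iteration of the health-check loop visible throughout this failure: roughly every three seconds it looks for a kube-apiserver process, asks crictl for each expected control-plane container, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs. A rough, hypothetical Go sketch of that loop (component names and commands copied from the log; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Control-plane components the log polls for, in the same order.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	func pollOnce() bool {
		// pgrep exits non-zero when no matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true // apiserver process found; stop diagnosing
		}
		for _, c := range components {
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
		return false
	}

	func main() {
		for !pollOnce() {
			time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
		}
	}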
	I1213 10:11:47.482268  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:47.492724  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:47.492804  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:47.517619  285837 cri.go:89] found id: ""
	I1213 10:11:47.517646  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.517655  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:47.517661  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:47.517731  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:47.543100  285837 cri.go:89] found id: ""
	I1213 10:11:47.543137  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.543150  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:47.543160  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:47.543223  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:47.573882  285837 cri.go:89] found id: ""
	I1213 10:11:47.573906  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.573915  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:47.573922  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:47.573979  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:47.598649  285837 cri.go:89] found id: ""
	I1213 10:11:47.598676  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.598685  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:47.598692  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:47.598753  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:47.629998  285837 cri.go:89] found id: ""
	I1213 10:11:47.630034  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.630048  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:47.630056  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:47.630135  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:47.658608  285837 cri.go:89] found id: ""
	I1213 10:11:47.658652  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.658662  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:47.658669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:47.658739  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:47.685293  285837 cri.go:89] found id: ""
	I1213 10:11:47.685337  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.685346  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:47.685352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:47.685419  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:47.711048  285837 cri.go:89] found id: ""
	I1213 10:11:47.711072  285837 logs.go:282] 0 containers: []
	W1213 10:11:47.711081  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:47.711091  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:47.711102  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:47.774561  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:47.774611  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:47.814155  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:47.814228  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:47.909982  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:47.900447   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.900974   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.902686   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.903281   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:47.905051   10900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:47.910015  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:47.910028  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:47.938465  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:47.938502  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:50.475972  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:50.488352  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:50.488421  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:50.513516  285837 cri.go:89] found id: ""
	I1213 10:11:50.513548  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.513558  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:50.513565  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:50.513619  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:50.538473  285837 cri.go:89] found id: ""
	I1213 10:11:50.538498  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.538507  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:50.538513  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:50.538569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:50.562753  285837 cri.go:89] found id: ""
	I1213 10:11:50.562775  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.562784  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:50.562790  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:50.562844  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:50.587561  285837 cri.go:89] found id: ""
	I1213 10:11:50.587587  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.587597  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:50.587603  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:50.587658  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:50.612019  285837 cri.go:89] found id: ""
	I1213 10:11:50.612048  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.612058  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:50.612064  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:50.612123  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:50.636935  285837 cri.go:89] found id: ""
	I1213 10:11:50.636959  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.636967  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:50.636973  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:50.637034  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:50.661053  285837 cri.go:89] found id: ""
	I1213 10:11:50.661076  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.661085  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:50.661091  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:50.661148  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:50.690108  285837 cri.go:89] found id: ""
	I1213 10:11:50.690178  285837 logs.go:282] 0 containers: []
	W1213 10:11:50.690201  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:50.690223  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:50.690262  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:50.748741  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:50.748775  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:50.762458  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:50.762490  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:50.892763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:50.884204   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.884596   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886208   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.886796   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:50.888413   11008 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:50.892783  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:50.892796  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:50.918206  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:50.918240  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:53.447378  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:53.457486  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:53.457551  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:53.482258  285837 cri.go:89] found id: ""
	I1213 10:11:53.482283  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.482292  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:53.482299  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:53.482357  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:53.511304  285837 cri.go:89] found id: ""
	I1213 10:11:53.511330  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.511339  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:53.511345  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:53.511405  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:53.540251  285837 cri.go:89] found id: ""
	I1213 10:11:53.540277  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.540286  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:53.540291  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:53.540349  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:53.565753  285837 cri.go:89] found id: ""
	I1213 10:11:53.565781  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.565791  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:53.565797  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:53.565855  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:53.595124  285837 cri.go:89] found id: ""
	I1213 10:11:53.595151  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.595160  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:53.595166  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:53.595224  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:53.620269  285837 cri.go:89] found id: ""
	I1213 10:11:53.620293  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.620302  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:53.620311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:53.620369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:53.645281  285837 cri.go:89] found id: ""
	I1213 10:11:53.645309  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.645318  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:53.645325  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:53.645388  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:53.670326  285837 cri.go:89] found id: ""
	I1213 10:11:53.670351  285837 logs.go:282] 0 containers: []
	W1213 10:11:53.670360  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:53.670369  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:53.670386  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:53.726845  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:53.726879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:53.740167  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:53.740194  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:53.843634  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:53.828906   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830317   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.830864   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.835449   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:53.836015   11123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:53.843657  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:53.843669  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:53.870910  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:53.870995  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
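	Note how each "failed describe nodes" warning carries the captured stderr twice: once inline in the error text and once between the ** stderr ** / ** /stderr ** markers. That pattern is consistent with a runner that buffers stdout and stderr separately and logs both the wrapped error and the raw streams. A plausible sketch under that assumption (the command string is copied from the log; the double print mirrors the observed output, not confirmed minikube source):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			// Emit the wrapped error (which embeds stderr) and the raw
			// stream again, as the warning lines above appear to do.
			fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, stdout.String(), stderr.String())
			fmt.Printf(" output: \n** stderr ** \n%s\n** /stderr **\n", stderr.String())
		}
	}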
	I1213 10:11:56.405428  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:56.415940  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:56.416016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:56.449974  285837 cri.go:89] found id: ""
	I1213 10:11:56.449996  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.450004  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:56.450010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:56.450069  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:56.474847  285837 cri.go:89] found id: ""
	I1213 10:11:56.474873  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.474882  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:56.474888  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:56.474946  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:56.504742  285837 cri.go:89] found id: ""
	I1213 10:11:56.504768  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.504777  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:56.504783  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:56.504841  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:56.529471  285837 cri.go:89] found id: ""
	I1213 10:11:56.529493  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.529502  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:56.529509  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:56.529569  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:56.553719  285837 cri.go:89] found id: ""
	I1213 10:11:56.553740  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.553749  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:56.553755  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:56.553812  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:56.579917  285837 cri.go:89] found id: ""
	I1213 10:11:56.579942  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.579950  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:56.579957  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:56.580015  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:56.603606  285837 cri.go:89] found id: ""
	I1213 10:11:56.603629  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.603638  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:56.603644  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:56.603702  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:56.628438  285837 cri.go:89] found id: ""
	I1213 10:11:56.628460  285837 logs.go:282] 0 containers: []
	W1213 10:11:56.628469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:56.628479  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:56.628491  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:11:56.655218  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:56.655245  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:56.711105  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:56.711138  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:56.724564  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:56.724597  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:56.800105  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:56.791391   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.792277   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.793993   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.794273   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:56.795851   11249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:56.800126  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:56.800141  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.341824  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:11:59.351965  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:11:59.352032  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:11:59.376522  285837 cri.go:89] found id: ""
	I1213 10:11:59.376544  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.376553  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:11:59.376559  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:11:59.376623  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:11:59.405422  285837 cri.go:89] found id: ""
	I1213 10:11:59.405497  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.405522  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:11:59.405537  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:11:59.405608  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:11:59.430317  285837 cri.go:89] found id: ""
	I1213 10:11:59.430344  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.430353  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:11:59.430359  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:11:59.430417  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:11:59.457827  285837 cri.go:89] found id: ""
	I1213 10:11:59.457854  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.457862  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:11:59.457868  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:11:59.457924  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:11:59.483234  285837 cri.go:89] found id: ""
	I1213 10:11:59.483261  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.483270  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:11:59.483277  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:11:59.483337  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:11:59.508270  285837 cri.go:89] found id: ""
	I1213 10:11:59.508296  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.508314  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:11:59.508322  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:11:59.508379  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:11:59.532819  285837 cri.go:89] found id: ""
	I1213 10:11:59.532842  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.532851  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:11:59.532857  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:11:59.532913  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:11:59.556482  285837 cri.go:89] found id: ""
	I1213 10:11:59.556508  285837 logs.go:282] 0 containers: []
	W1213 10:11:59.556517  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:11:59.556527  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:11:59.556540  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:11:59.611281  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:11:59.611315  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:11:59.624666  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:11:59.624694  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:11:59.690085  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:11:59.681680   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.682124   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.683696   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.684084   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:11:59.685716   11353 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:11:59.690108  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:11:59.690122  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:11:59.715666  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:11:59.715703  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:02.245206  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:02.256067  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:02.256147  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:02.280777  285837 cri.go:89] found id: ""
	I1213 10:12:02.280801  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.280809  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:02.280821  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:02.280885  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:02.305877  285837 cri.go:89] found id: ""
	I1213 10:12:02.305905  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.305914  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:02.305920  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:02.305988  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:02.330860  285837 cri.go:89] found id: ""
	I1213 10:12:02.330886  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.330894  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:02.330900  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:02.330965  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:02.356613  285837 cri.go:89] found id: ""
	I1213 10:12:02.356649  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.356659  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:02.356665  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:02.356746  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:02.388158  285837 cri.go:89] found id: ""
	I1213 10:12:02.388181  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.388190  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:02.388196  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:02.388256  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:02.415431  285837 cri.go:89] found id: ""
	I1213 10:12:02.415454  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.415462  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:02.415468  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:02.415538  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:02.442554  285837 cri.go:89] found id: ""
	I1213 10:12:02.442580  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.442589  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:02.442595  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:02.442654  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:02.468134  285837 cri.go:89] found id: ""
	I1213 10:12:02.468159  285837 logs.go:282] 0 containers: []
	W1213 10:12:02.468167  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:02.468177  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:02.468188  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:02.526799  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:02.526832  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:02.542508  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:02.542533  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:02.616614  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:02.606681   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.607231   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.609505   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.610854   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:02.611410   11467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:02.616637  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:02.616650  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:02.641382  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:02.641415  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:05.169197  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:05.179948  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:05.180017  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:05.205082  285837 cri.go:89] found id: ""
	I1213 10:12:05.205105  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.205113  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:05.205119  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:05.205176  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:05.234272  285837 cri.go:89] found id: ""
	I1213 10:12:05.234295  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.234305  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:05.234311  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:05.234369  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:05.259024  285837 cri.go:89] found id: ""
	I1213 10:12:05.259047  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.259055  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:05.259062  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:05.259120  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:05.287223  285837 cri.go:89] found id: ""
	I1213 10:12:05.287249  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.287257  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:05.287264  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:05.287323  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:05.311741  285837 cri.go:89] found id: ""
	I1213 10:12:05.311831  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.311859  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:05.311904  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:05.312016  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:05.337137  285837 cri.go:89] found id: ""
	I1213 10:12:05.337161  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.337170  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:05.337176  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:05.337232  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:05.361938  285837 cri.go:89] found id: ""
	I1213 10:12:05.361967  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.361976  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:05.361982  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:05.362063  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:05.387423  285837 cri.go:89] found id: ""
	I1213 10:12:05.387460  285837 logs.go:282] 0 containers: []
	W1213 10:12:05.387469  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:05.387478  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:05.387489  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:05.446385  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:05.446423  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:05.460052  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:05.460075  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:05.534925  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:05.526612   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.527034   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.528674   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.529348   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:05.530958   11582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:05.534954  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:05.534969  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:05.561237  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:05.561278  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
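
	Editor's note: the lines above show one full iteration of the wait loop that repeats below roughly every three seconds: probe for a kube-apiserver process with pgrep, list each expected control-plane container with crictl, and, finding none, gather kubelet, dmesg, describe-nodes, containerd, and container-status logs. What follows is a minimal, hypothetical Go sketch of that probe-then-inspect pattern, assuming a host with sudo, pgrep, and crictl available; the names components, apiserverRunning, and containerIDs are illustrative and are not minikube's actual identifiers.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Components the log polls for, in the same order as the crictl calls above.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
	// probe: pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<name>", which
	// prints one container ID per line and nothing when there is no match.
	func containerIDs(name string) []string {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			for _, name := range components {
				if len(containerIDs(name)) == 0 {
					fmt.Printf("no container was found matching %q\n", name)
				}
			}
			time.Sleep(3 * time.Second) // the timestamps above show ~3s between cycles
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
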
	I1213 10:12:08.090523  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:08.103723  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:08.103793  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:08.130437  285837 cri.go:89] found id: ""
	I1213 10:12:08.130464  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.130473  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:08.130479  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:08.130536  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:08.158259  285837 cri.go:89] found id: ""
	I1213 10:12:08.158286  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.158295  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:08.158301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:08.158359  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:08.183457  285837 cri.go:89] found id: ""
	I1213 10:12:08.183484  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.183493  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:08.183499  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:08.183589  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:08.207480  285837 cri.go:89] found id: ""
	I1213 10:12:08.207507  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.207613  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:08.207620  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:08.207681  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:08.231959  285837 cri.go:89] found id: ""
	I1213 10:12:08.232037  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.232053  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:08.232061  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:08.232131  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:08.255921  285837 cri.go:89] found id: ""
	I1213 10:12:08.255986  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.256003  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:08.256010  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:08.256074  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:08.280187  285837 cri.go:89] found id: ""
	I1213 10:12:08.280254  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.280269  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:08.280276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:08.280332  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:08.308900  285837 cri.go:89] found id: ""
	I1213 10:12:08.308974  285837 logs.go:282] 0 containers: []
	W1213 10:12:08.308997  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:08.309014  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:08.309029  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:08.322959  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:08.322986  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:08.387674  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:08.378725   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.379335   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.380972   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.381487   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:08.383076   11697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:08.387701  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:08.387715  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:08.413378  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:08.413414  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:08.444856  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:08.444888  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.000292  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:11.012216  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:11.012287  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:11.063803  285837 cri.go:89] found id: ""
	I1213 10:12:11.063829  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.063838  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:11.063845  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:11.063910  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:11.103072  285837 cri.go:89] found id: ""
	I1213 10:12:11.103099  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.103109  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:11.103115  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:11.103171  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:11.138581  285837 cri.go:89] found id: ""
	I1213 10:12:11.138606  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.138614  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:11.138631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:11.138686  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:11.163663  285837 cri.go:89] found id: ""
	I1213 10:12:11.163735  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.163760  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:11.163779  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:11.163862  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:11.188635  285837 cri.go:89] found id: ""
	I1213 10:12:11.188701  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.188716  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:11.188722  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:11.188779  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:11.217597  285837 cri.go:89] found id: ""
	I1213 10:12:11.217620  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.217628  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:11.217634  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:11.217690  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:11.241986  285837 cri.go:89] found id: ""
	I1213 10:12:11.242009  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.242017  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:11.242023  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:11.242078  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:11.266556  285837 cri.go:89] found id: ""
	I1213 10:12:11.266578  285837 logs.go:282] 0 containers: []
	W1213 10:12:11.266586  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:11.266596  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:11.266607  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:11.298567  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:11.298592  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:11.354117  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:11.354151  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:11.367112  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:11.367187  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:11.430754  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:11.422695   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.423276   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.424837   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.425327   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:11.426819   11825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:11.430832  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:11.430859  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:13.957251  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:13.968979  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:13.969058  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:13.994304  285837 cri.go:89] found id: ""
	I1213 10:12:13.994326  285837 logs.go:282] 0 containers: []
	W1213 10:12:13.994334  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:13.994341  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:13.994396  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:14.032552  285837 cri.go:89] found id: ""
	I1213 10:12:14.032584  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.032593  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:14.032600  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:14.032663  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:14.104797  285837 cri.go:89] found id: ""
	I1213 10:12:14.104823  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.104833  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:14.104839  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:14.104901  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:14.130796  285837 cri.go:89] found id: ""
	I1213 10:12:14.130821  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.130831  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:14.130837  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:14.130892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:14.157587  285837 cri.go:89] found id: ""
	I1213 10:12:14.157616  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.157625  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:14.157631  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:14.157689  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:14.183166  285837 cri.go:89] found id: ""
	I1213 10:12:14.183191  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.183199  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:14.183205  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:14.183271  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:14.207844  285837 cri.go:89] found id: ""
	I1213 10:12:14.207871  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.207880  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:14.207886  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:14.207943  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:14.232398  285837 cri.go:89] found id: ""
	I1213 10:12:14.232420  285837 logs.go:282] 0 containers: []
	W1213 10:12:14.232429  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:14.232438  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:14.232450  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:14.263838  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:14.263869  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:14.322835  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:14.322870  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:14.336577  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:14.336609  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:14.404961  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:14.396535   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.397107   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.398835   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.399389   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:14.400964   11937 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:14.405007  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:14.405047  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:16.930423  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:16.941126  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:16.941197  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:16.968990  285837 cri.go:89] found id: ""
	I1213 10:12:16.969013  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.969023  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:16.969029  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:16.969093  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:16.994277  285837 cri.go:89] found id: ""
	I1213 10:12:16.994298  285837 logs.go:282] 0 containers: []
	W1213 10:12:16.994307  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:16.994319  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:16.994374  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:17.052160  285837 cri.go:89] found id: ""
	I1213 10:12:17.052187  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.052196  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:17.052202  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:17.052260  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:17.112056  285837 cri.go:89] found id: ""
	I1213 10:12:17.112122  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.112136  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:17.112142  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:17.112201  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:17.137264  285837 cri.go:89] found id: ""
	I1213 10:12:17.137287  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.137295  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:17.137301  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:17.137356  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:17.161759  285837 cri.go:89] found id: ""
	I1213 10:12:17.161780  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.161802  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:17.161808  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:17.161864  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:17.187256  285837 cri.go:89] found id: ""
	I1213 10:12:17.187288  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.187296  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:17.187302  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:17.187372  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:17.213316  285837 cri.go:89] found id: ""
	I1213 10:12:17.213380  285837 logs.go:282] 0 containers: []
	W1213 10:12:17.213400  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:17.213413  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:17.213424  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:17.241644  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:17.241674  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:17.298584  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:17.298617  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:17.313303  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:17.313331  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:17.387719  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:17.378154   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.379329   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.380946   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.381429   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:17.382999   12055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:17.387742  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:17.387755  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:19.919282  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:19.929646  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:19.929711  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:19.961717  285837 cri.go:89] found id: ""
	I1213 10:12:19.961739  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.961748  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:19.961754  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:19.961811  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:19.986281  285837 cri.go:89] found id: ""
	I1213 10:12:19.986306  285837 logs.go:282] 0 containers: []
	W1213 10:12:19.986315  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:19.986321  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:19.986375  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:20.035442  285837 cri.go:89] found id: ""
	I1213 10:12:20.035468  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.035478  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:20.035484  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:20.035574  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:20.086605  285837 cri.go:89] found id: ""
	I1213 10:12:20.086627  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.086635  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:20.086642  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:20.086698  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:20.121043  285837 cri.go:89] found id: ""
	I1213 10:12:20.121065  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.121073  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:20.121079  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:20.121136  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:20.148016  285837 cri.go:89] found id: ""
	I1213 10:12:20.148083  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.148105  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:20.148124  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:20.148209  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:20.175168  285837 cri.go:89] found id: ""
	I1213 10:12:20.175234  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.175257  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:20.175276  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:20.175363  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:20.206568  285837 cri.go:89] found id: ""
	I1213 10:12:20.206590  285837 logs.go:282] 0 containers: []
	W1213 10:12:20.206599  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:20.206608  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:20.206619  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:20.234244  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:20.234308  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:20.290937  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:20.290972  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:20.304498  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:20.304527  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:20.367763  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:20.358899   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.359439   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361173   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.361714   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:20.363335   12172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:20.367830  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:20.367849  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
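
	Editor's note: every "failed describe nodes" block above reduces to the same symptom: kubectl dials localhost:8443 and the connection is refused because no apiserver is bound to that port. A minimal sketch, assuming only the Go standard library, that reproduces the identical refusal with a plain TCP dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint kubectl retries above; a refusal means nothing is listening.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
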
	I1213 10:12:22.894711  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:22.905901  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:22.905969  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:22.936437  285837 cri.go:89] found id: ""
	I1213 10:12:22.936460  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.936468  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:22.936474  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:22.936533  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:22.961367  285837 cri.go:89] found id: ""
	I1213 10:12:22.961390  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.961416  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:22.961425  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:22.961484  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:22.984924  285837 cri.go:89] found id: ""
	I1213 10:12:22.984949  285837 logs.go:282] 0 containers: []
	W1213 10:12:22.984958  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:22.984964  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:22.985046  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:23.012110  285837 cri.go:89] found id: ""
	I1213 10:12:23.012175  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.012191  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:23.012198  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:23.012258  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:23.053789  285837 cri.go:89] found id: ""
	I1213 10:12:23.053816  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.053825  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:23.053831  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:23.053888  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:23.102082  285837 cri.go:89] found id: ""
	I1213 10:12:23.102104  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.102112  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:23.102118  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:23.102173  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:23.139793  285837 cri.go:89] found id: ""
	I1213 10:12:23.139820  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.139830  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:23.139836  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:23.139892  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:23.163400  285837 cri.go:89] found id: ""
	I1213 10:12:23.163426  285837 logs.go:282] 0 containers: []
	W1213 10:12:23.163436  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:23.163451  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:23.163464  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:23.227709  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:23.227744  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:23.241604  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:23.241631  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:23.305636  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:23.297583   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.298334   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.299957   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.300270   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:23.301534   12271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:23.305670  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:23.305683  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:23.331847  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:23.331879  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:25.858551  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:25.871752  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:25.871822  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:25.897476  285837 cri.go:89] found id: ""
	I1213 10:12:25.897527  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.897536  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:25.897543  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:25.897600  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:25.925782  285837 cri.go:89] found id: ""
	I1213 10:12:25.925807  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.925817  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:25.925823  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:25.925906  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:25.949723  285837 cri.go:89] found id: ""
	I1213 10:12:25.949750  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.949760  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:25.949766  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:25.949842  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:25.973991  285837 cri.go:89] found id: ""
	I1213 10:12:25.974016  285837 logs.go:282] 0 containers: []
	W1213 10:12:25.974025  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:25.974032  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:25.974107  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:26.001033  285837 cri.go:89] found id: ""
	I1213 10:12:26.001056  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.001064  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:26.001070  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:26.001144  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:26.077273  285837 cri.go:89] found id: ""
	I1213 10:12:26.077300  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.077309  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:26.077316  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:26.077397  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:26.122203  285837 cri.go:89] found id: ""
	I1213 10:12:26.122230  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.122240  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:26.122246  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:26.122346  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:26.147712  285837 cri.go:89] found id: ""
	I1213 10:12:26.147736  285837 logs.go:282] 0 containers: []
	W1213 10:12:26.147745  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:26.147781  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:26.147799  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:26.203487  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:26.203528  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:26.217213  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:26.217246  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:26.284727  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:26.276312   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.277260   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.278746   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.279123   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.280769   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:26.276312   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.277260   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.278746   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.279123   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:26.280769   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:12:26.284751  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:26.284763  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:26.312716  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:26.312773  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
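
	Editor's note: the "container status" gather above relies on a shell fallback: "which crictl || echo crictl" substitutes the bare word crictl when the binary is absent, so the first pipeline command fails cleanly and "sudo docker ps -a" runs instead. A hypothetical Go wrapper around the same one-liner, assuming /bin/bash and sudo exist on the host:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Verbatim fallback command from the log: prefer crictl, else docker.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
		}
		fmt.Print(string(out))
	}
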
	I1213 10:12:28.841875  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:28.852491  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:12:28.852562  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:12:28.881629  285837 cri.go:89] found id: ""
	I1213 10:12:28.881653  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.881662  285837 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:12:28.881669  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1213 10:12:28.881728  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:12:28.906270  285837 cri.go:89] found id: ""
	I1213 10:12:28.906296  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.906306  285837 logs.go:284] No container was found matching "etcd"
	I1213 10:12:28.906312  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1213 10:12:28.906370  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:12:28.931578  285837 cri.go:89] found id: ""
	I1213 10:12:28.931599  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.931607  285837 logs.go:284] No container was found matching "coredns"
	I1213 10:12:28.931612  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:12:28.931666  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:12:28.957311  285837 cri.go:89] found id: ""
	I1213 10:12:28.957334  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.957343  285837 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:12:28.957349  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:12:28.957406  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:12:28.981753  285837 cri.go:89] found id: ""
	I1213 10:12:28.981778  285837 logs.go:282] 0 containers: []
	W1213 10:12:28.981787  285837 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:12:28.981794  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:12:28.981849  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:12:29.006917  285837 cri.go:89] found id: ""
	I1213 10:12:29.006945  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.006955  285837 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:12:29.006962  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1213 10:12:29.007029  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:12:29.066909  285837 cri.go:89] found id: ""
	I1213 10:12:29.066935  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.066944  285837 logs.go:284] No container was found matching "kindnet"
	I1213 10:12:29.066950  285837 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 10:12:29.067008  285837 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 10:12:29.105599  285837 cri.go:89] found id: ""
	I1213 10:12:29.105625  285837 logs.go:282] 0 containers: []
	W1213 10:12:29.105633  285837 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 10:12:29.105642  285837 logs.go:123] Gathering logs for containerd ...
	I1213 10:12:29.105652  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1213 10:12:29.130961  285837 logs.go:123] Gathering logs for container status ...
	I1213 10:12:29.131003  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:12:29.157785  285837 logs.go:123] Gathering logs for kubelet ...
	I1213 10:12:29.157819  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:12:29.213436  285837 logs.go:123] Gathering logs for dmesg ...
	I1213 10:12:29.213472  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:12:29.227454  285837 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:12:29.227485  285837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:12:29.298087  285837 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:12:29.289850   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.290315   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.291761   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.292182   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.293636   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:12:29.289850   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.290315   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.291761   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.292182   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:12:29.293636   12519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
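	For anyone reproducing this by hand, the cycle above can be replayed directly on the node; a minimal sketch using only commands that appear in this log, plus `minikube ssh` (assumed available on the host; the profile name newest-cni-987495 is taken from the containerd log below):
	
	# open a shell on the node for this profile (host side)
	minikube ssh -p newest-cni-987495
	# inside the node: the same checks the loop above runs
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is an apiserver process running?
	sudo crictl ps -a --quiet --name=kube-apiserver  # any apiserver container, in any state?
	sudo journalctl -u kubelet -n 400                # kubelet's view of why pods never started
	sudo journalctl -u containerd -n 400             # runtime-side view
	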
	[identical log-gathering cycles repeat every ~3s from 10:12:31 through 10:12:49 (kubectl PIDs 12619, 12731, 12842, 12958, 13068, 13191, 13305), each with the same result: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers found, and "kubectl describe nodes" failing with "dial tcp [::1]:8443: connect: connection refused"]
	I1213 10:12:52.425030  285837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:12:52.438702  285837 out.go:203] 
	W1213 10:12:52.441528  285837 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 10:12:52.441562  285837 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 10:12:52.441572  285837 out.go:285] * Related issues:
	W1213 10:12:52.441583  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 10:12:52.441596  285837 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 10:12:52.444462  285837 out.go:203] 
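	# A minimal manual triage sketch for the K8S_APISERVER_MISSING exit above,
	# assuming the profile name "newest-cni-987495" taken from this log:
	minikube ssh -p newest-cni-987495 "getenforce 2>/dev/null || echo 'selinux tooling not installed'"
	minikube ssh -p newest-cni-987495 "sudo pgrep -af kube-apiserver || echo 'no kube-apiserver process'"
	# getenforce is only present where SELinux tooling is installed; on this
	# Debian bookworm node image the fallback echo is the expected outcome.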
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139688152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139757700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139854054Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139930494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.139999639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140070369Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140128880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140186801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140255347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140343226Z" level=info msg="Connect containerd service"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.140691374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.141400233Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153338815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153402373Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153439969Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.153482546Z" level=info msg="Start recovering state"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194202999Z" level=info msg="Start event monitor"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194399260Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194485793Z" level=info msg="Start streaming server"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194562487Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194779253Z" level=info msg="runtime interface starting up..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194850729Z" level=info msg="starting plugins..."
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.194929983Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:06:50 newest-cni-987495 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:06:50 newest-cni-987495 containerd[556]: time="2025-12-13T10:06:50.196601776Z" level=info msg="containerd successfully booted in 0.081602s"
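	# The "no network config found in /etc/cni/net.d" error above is normal
	# before a CNI is installed; a hedged way to confirm the directory really
	# is empty (profile name taken from this log):
	minikube ssh -p newest-cni-987495 "ls -la /etc/cni/net.d || true"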
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:13:05.596269   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:05.596897   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:05.598384   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:05.598860   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:13:05.600303   13975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
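	# Every memcache.go error above reduces to nothing listening on 8443 inside
	# the node. Two hedged checks, mirroring commands the harness itself runs
	# elsewhere in this log (curl may not be present in every node image):
	minikube ssh -p newest-cni-987495 "sudo crictl ps -a --name kube-apiserver"
	minikube ssh -p newest-cni-987495 "curl -sk https://localhost:8443/healthz || echo 'nothing listening on 8443'"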
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:13:05 up  1:55,  0 user,  load average: 0.81, 0.66, 1.07
	Linux newest-cni-987495 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:13:02 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:03 newest-cni-987495 kubelet[13836]: E1213 10:13:03.357945   13836 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:03 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:04 newest-cni-987495 kubelet[13870]: E1213 10:13:04.046873   13870 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:04 newest-cni-987495 kubelet[13878]: E1213 10:13:04.834782   13878 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:04 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:13:05 newest-cni-987495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 13 10:13:05 newest-cni-987495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:05 newest-cni-987495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:13:05 newest-cni-987495 kubelet[13968]: E1213 10:13:05.584481   13968 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:13:05 newest-cni-987495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:13:05 newest-cni-987495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
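	# Every kubelet restart above fails the same validation: the v1.35.0-beta.0
	# kubelet refuses to run on a cgroup v1 host. A hedged one-liner to confirm
	# which cgroup filesystem the host mounts ("cgroup2fs" = v2, "tmpfs" = v1,
	# the case kubelet is rejecting here):
	stat -fc %T /sys/fs/cgroup/
	# Recent kubelets expose a --fail-cgroupv1 toggle as an escape hatch, but
	# whether this beta honors it on this host would need separate verification.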
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-987495 -n newest-cni-987495: exit status 2 (352.026153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-987495" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.79s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (269.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
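A hedged manual equivalent of this 9m0s poll: the warnings collapsed below all come from the same pod-list call hitting a refused connection on 192.168.76.2:8443. The context name below is an assumption; this excerpt never prints the full no-preload profile name.

  # <no-preload-profile> is a placeholder, not a value taken from this log:
  kubectl --context <no-preload-profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard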
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above appears 39 consecutive times in the original log; verbatim duplicates elided]
I1213 10:18:02.542092    4120 config.go:182] Loaded profile config "calico-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above appears 49 consecutive times in the original log; verbatim duplicates elided]
E1213 10:18:51.887714    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above appears 39 consecutive times in the original log; verbatim duplicates elided]
E1213 10:19:30.358700    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.365165    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.376683    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.398221    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.439601    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.521338    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:30.682927    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
I1213 10:19:31.291057    4120 config.go:182] Loaded profile config "custom-flannel-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:19:32.933253    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:19:35.494786    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:19:35.553461    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:19:40.616289    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:19:50.857717    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:20:11.200955    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/default-k8s-diff-port-544967/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:20:11.339855    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:20:52.301102    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:21:20.457666    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.464040    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.475564    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.497011    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.538502    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.619894    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:20.781483    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:21:21.103304    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:21:21.744944    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:21:25.589438    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:21:30.711223    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1213 10:21:40.007071    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
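For context, each warning above is one failed list of the dashboard pods by label selector against the stopped apiserver. Below is a minimal sketch of such a poll using client-go; the helper name pollDashboard and the 3-second interval are assumptions for illustration, not minikube's actual helpers_test.go code.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollDashboard lists the dashboard pods by label selector until one
// appears or ctx expires, logging a warning on each failed attempt --
// the same shape of loop that produced the warnings above.
func pollDashboard(ctx context.Context, cs kubernetes.Interface) error {
	ticker := time.NewTicker(3 * time.Second) // interval is an assumption
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// With the apiserver stopped this is "connect: connection refused".
			fmt.Printf("WARNING: pod list returned: %v\n", err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" once the 9m0s budget runs out
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	fmt.Println(pollDashboard(ctx, kubernetes.NewForConfigOrDie(cfg)))
}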
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 2 (300.422493ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-328069 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-328069 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.649µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-328069 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
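
Editor's note: the repeated "connection refused" warnings above come from the harness polling the dashboard pod list against the apiserver endpoint (192.168.76.2:8443) until the 9m0s deadline lapses. Below is a minimal sketch of that polling pattern using client-go; the kubeconfig path, sleep interval, and structure are illustrative assumptions, not the actual helpers_test.go code.

// poll_dashboard.go - a sketch of the label-selector polling loop behind the
// warnings above. Assumes standard client-go; the path and timings are
// hypothetical.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; the harness uses the profile's own.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(9 * time.Minute) // matches the 9m0s wait in the test
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver down, every attempt fails exactly like the
			// warnings above: dial tcp 192.168.76.2:8443: connect: connection refused
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(10 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pods\n", len(pods.Items))
		return
	}
	fmt.Println("context deadline exceeded")
}

Because the apiserver never comes back after the stop/start cycle (see the kubelet log further down), every iteration fails the same way until the context deadline fires, which is why the subsequent kubectl describe also exits on context deadline exceeded.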
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-328069
helpers_test.go:244: (dbg) docker inspect no-preload-328069:
-- stdout --
	[
	    {
	        "Id": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	        "Created": "2025-12-13T09:51:52.758803928Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 279480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:02:12.212548985Z",
	            "FinishedAt": "2025-12-13T10:02:10.889738311Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/hosts",
	        "LogPath": "/var/lib/docker/containers/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f/fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f-json.log",
	        "Name": "/no-preload-328069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-328069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-328069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe2aab224d926a8be60e1a33e314e7ce9682710e829de54a2fff2595e9eb016f",
	                "LowerDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b715fefe582739ae911a2ccdcc63622ab2a15c11e15a100eef66d073c765a96/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-328069",
	                "Source": "/var/lib/docker/volumes/no-preload-328069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-328069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-328069",
	                "name.minikube.sigs.k8s.io": "no-preload-328069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e549dceaa2628f46a792f0513237bae1c9187e2280b148782465d5223dc837ce",
	            "SandboxKey": "/var/run/docker/netns/e549dceaa262",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-328069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:94:67:0e:78:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "73962579d63346bb2ce9ff6da0af02a105fd1aac4b494f0d3fbfb09d208e1ce7",
	                    "EndpointID": "1f33b140f1554f462bc470ee8cae381e2b3ff6375e4e1f2dfdc3776ccc0d5791",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-328069",
	                        "fe2aab224d92"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
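
Editor's note: the docker inspect block is how the post-mortem maps container ports to host ports (8443/tcp is published on 127.0.0.1:33101 above, while the cluster IP 192.168.76.2 serves 8443 on the minikube network). A minimal sketch of extracting those bindings from `docker inspect` JSON, decoding only the fields used here; the struct is an illustrative subset, not a Docker API type:

// inspect_ports.go - a sketch of pulling published host ports out of
// `docker inspect` output like the block above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry covers only the fields this sketch reads; everything else
// in the inspect JSON is ignored by the decoder.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker inspect prints a JSON array, one entry per named object.
	out, err := exec.Command("docker", "inspect", "no-preload-328069").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	// e.g. 8443/tcp -> 127.0.0.1:33101 in the output above
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}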
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 2 (383.632292ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-328069 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl status kubelet --all --full --no-pager                                                             │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl cat kubelet --no-pager                                                                             │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo journalctl -xeu kubelet --all --full --no-pager                                                              │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /etc/kubernetes/kubelet.conf                                                                             │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /var/lib/kubelet/config.yaml                                                                             │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl status docker --all --full --no-pager                                                              │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl cat docker --no-pager                                                                              │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /etc/docker/daemon.json                                                                                  │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo docker system info                                                                                           │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl status cri-docker --all --full --no-pager                                                          │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl cat cri-docker --no-pager                                                                          │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                     │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /usr/lib/systemd/system/cri-docker.service                                                               │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cri-dockerd --version                                                                                        │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl status containerd --all --full --no-pager                                                          │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl cat containerd --no-pager                                                                          │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /lib/systemd/system/containerd.service                                                                   │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo cat /etc/containerd/config.toml                                                                              │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo containerd config dump                                                                                       │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl status crio --all --full --no-pager                                                                │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	│ ssh     │ -p enable-default-cni-324081 sudo systemctl cat crio --no-pager                                                                                │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                      │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ ssh     │ -p enable-default-cni-324081 sudo crio config                                                                                                  │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ delete  │ -p enable-default-cni-324081                                                                                                                   │ enable-default-cni-324081 │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │ 13 Dec 25 10:21 UTC │
	│ start   │ -p flannel-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-324081            │ jenkins │ v1.37.0 │ 13 Dec 25 10:21 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:21:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:21:42.877502  340111 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:21:42.877620  340111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:21:42.877631  340111 out.go:374] Setting ErrFile to fd 2...
	I1213 10:21:42.877637  340111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:21:42.877879  340111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 10:21:42.878305  340111 out.go:368] Setting JSON to false
	I1213 10:21:42.879127  340111 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7455,"bootTime":1765613848,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:21:42.879190  340111 start.go:143] virtualization:  
	I1213 10:21:42.882735  340111 out.go:179] * [flannel-324081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:21:42.886890  340111 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 10:21:42.887046  340111 notify.go:221] Checking for updates...
	I1213 10:21:42.893138  340111 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:21:42.896220  340111 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 10:21:42.899276  340111 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 10:21:42.902286  340111 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:21:42.905241  340111 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:21:42.908633  340111 config.go:182] Loaded profile config "no-preload-328069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 10:21:42.908757  340111 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:21:42.941102  340111 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:21:42.941229  340111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:21:43.001656  340111 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:21:42.991860921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:21:43.001765  340111 docker.go:319] overlay module found
	I1213 10:21:43.007870  340111 out.go:179] * Using the docker driver based on user configuration
	I1213 10:21:43.010927  340111 start.go:309] selected driver: docker
	I1213 10:21:43.010955  340111 start.go:927] validating driver "docker" against <nil>
	I1213 10:21:43.010973  340111 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:21:43.011750  340111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:21:43.102256  340111 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:21:43.092056224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:21:43.102432  340111 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:21:43.102661  340111 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:21:43.105600  340111 out.go:179] * Using Docker driver with root privileges
	I1213 10:21:43.108630  340111 cni.go:84] Creating CNI manager for "flannel"
	I1213 10:21:43.108657  340111 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1213 10:21:43.108730  340111 start.go:353] cluster config:
	{Name:flannel-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:21:43.111890  340111 out.go:179] * Starting "flannel-324081" primary control-plane node in "flannel-324081" cluster
	I1213 10:21:43.114847  340111 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 10:21:43.117820  340111 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:21:43.120593  340111 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 10:21:43.120636  340111 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1213 10:21:43.120650  340111 cache.go:65] Caching tarball of preloaded images
	I1213 10:21:43.120675  340111 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:21:43.120730  340111 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1213 10:21:43.120740  340111 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1213 10:21:43.120853  340111 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/flannel-324081/config.json ...
	I1213 10:21:43.120872  340111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/flannel-324081/config.json: {Name:mk356de5b8398784a1195867a7b023e26c1f52ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:21:43.140247  340111 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:21:43.140271  340111 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:21:43.140291  340111 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:21:43.140320  340111 start.go:360] acquireMachinesLock for flannel-324081: {Name:mkda6f6b85dc3abeef97c96732808f087472bfab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:21:43.140451  340111 start.go:364] duration metric: took 110.68µs to acquireMachinesLock for "flannel-324081"
	I1213 10:21:43.140484  340111 start.go:93] Provisioning new machine with config: &{Name:flannel-324081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-324081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1213 10:21:43.140553  340111 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:21:43.143795  340111 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 10:21:43.144024  340111 start.go:159] libmachine.API.Create for "flannel-324081" (driver="docker")
	I1213 10:21:43.144062  340111 client.go:173] LocalClient.Create starting
	I1213 10:21:43.144126  340111 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
	I1213 10:21:43.144166  340111 main.go:143] libmachine: Decoding PEM data...
	I1213 10:21:43.144186  340111 main.go:143] libmachine: Parsing certificate...
	I1213 10:21:43.144243  340111 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
	I1213 10:21:43.144264  340111 main.go:143] libmachine: Decoding PEM data...
	I1213 10:21:43.144276  340111 main.go:143] libmachine: Parsing certificate...
	I1213 10:21:43.144639  340111 cli_runner.go:164] Run: docker network inspect flannel-324081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:21:43.160568  340111 cli_runner.go:211] docker network inspect flannel-324081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:21:43.160665  340111 network_create.go:284] running [docker network inspect flannel-324081] to gather additional debugging logs...
	I1213 10:21:43.160684  340111 cli_runner.go:164] Run: docker network inspect flannel-324081
	W1213 10:21:43.176163  340111 cli_runner.go:211] docker network inspect flannel-324081 returned with exit code 1
	I1213 10:21:43.176194  340111 network_create.go:287] error running [docker network inspect flannel-324081]: docker network inspect flannel-324081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-324081 not found
	I1213 10:21:43.176210  340111 network_create.go:289] output of [docker network inspect flannel-324081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-324081 not found
	
	** /stderr **
	I1213 10:21:43.176325  340111 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:21:43.192262  340111 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
	I1213 10:21:43.192618  340111 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5d2e6ae00d26 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:06:4d:2a:bb:ba} reservation:<nil>}
	I1213 10:21:43.192980  340111 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f516d686012e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:e6:44:cc:3a:d0} reservation:<nil>}
	I1213 10:21:43.193242  340111 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-73962579d633 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:d5:99:41:ca:7d} reservation:<nil>}
	I1213 10:21:43.193664  340111 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ef380}
	I1213 10:21:43.193687  340111 network_create.go:124] attempt to create docker network flannel-324081 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 10:21:43.193748  340111 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-324081 flannel-324081
	I1213 10:21:43.255227  340111 network_create.go:108] docker network flannel-324081 192.168.85.0/24 created
	I1213 10:21:43.255271  340111 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-324081" container
	I1213 10:21:43.255354  340111 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:21:43.272499  340111 cli_runner.go:164] Run: docker volume create flannel-324081 --label name.minikube.sigs.k8s.io=flannel-324081 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:21:43.290113  340111 oci.go:103] Successfully created a docker volume flannel-324081
	I1213 10:21:43.290196  340111 cli_runner.go:164] Run: docker run --rm --name flannel-324081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-324081 --entrypoint /usr/bin/test -v flannel-324081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:21:43.813453  340111 oci.go:107] Successfully prepared a docker volume flannel-324081
	I1213 10:21:43.813517  340111 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1213 10:21:43.813527  340111 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:21:43.813585  340111 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-324081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953597846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953660485Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953767875Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953849370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953910048Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.953970676Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954065652Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954126674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954193457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954302308Z" level=info msg="Connect containerd service"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.954668147Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.955354550Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966201405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966268516Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966298096Z" level=info msg="Start subscribing containerd event"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.966337842Z" level=info msg="Start recovering state"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985180374Z" level=info msg="Start event monitor"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985226668Z" level=info msg="Start cni network conf syncer for default"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985236646Z" level=info msg="Start streaming server"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985245721Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985254336Z" level=info msg="runtime interface starting up..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985260564Z" level=info msg="starting plugins..."
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.985290447Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:02:17 no-preload-328069 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 13 10:02:17 no-preload-328069 containerd[554]: time="2025-12-13T10:02:17.987150021Z" level=info msg="containerd successfully booted in 0.060163s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:21:53.027138   10246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:21:53.027848   10246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:21:53.029651   10246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:21:53.030192   10246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 10:21:53.031813   10246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> kernel <==
	 10:21:53 up  2:04,  0 user,  load average: 1.93, 1.64, 1.38
	Linux no-preload-328069 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:21:49 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:21:50 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1561.
	Dec 13 10:21:50 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:50 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:50 no-preload-328069 kubelet[10110]: E1213 10:21:50.566650   10110 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:21:50 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:21:50 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:21:51 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1562.
	Dec 13 10:21:51 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:51 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:51 no-preload-328069 kubelet[10116]: E1213 10:21:51.321320   10116 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:21:51 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:21:51 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1563.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:52 no-preload-328069 kubelet[10138]: E1213 10:21:52.091721   10138 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:52 no-preload-328069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:21:52 no-preload-328069 kubelet[10218]: E1213 10:21:52.840743   10218 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:21:52 no-preload-328069 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
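
Editor's note: the kubelet section of the log above is the root cause of this failure group. kubelet v1.35.0-beta.0 validates the host's cgroup mode at startup and exits when it finds cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), so systemd has restarted it over 1560 times, no node components come up, and the apiserver on 8443 never answers. That one condition accounts for the connection-refused polling, the empty container status table, and the "Stopped" apiserver status in this test. A quick diagnostic for the host's cgroup mode follows, using the common cgroup.controllers probe (a unified cgroup v2 mount exposes that file at the hierarchy root); this is a sketch for debugging, not minikube code.

// cgroup_check.go - detect whether the host is on the legacy (v1) or
// unified (v2) cgroup hierarchy, the condition the kubelet error trips over.
package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup.controllers only exists at the root of a cgroup v2 mount.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// Ubuntu 20.04's default boot lands here unless
		// systemd.unified_cgroup_hierarchy=1 is set on the kernel cmdline.
		fmt.Println("cgroup v1 (legacy hierarchy) - kubelet v1.35.0-beta.0 refuses to run")
	}
}

On this runner (Ubuntu 20.04, kernel 5.15.0-1084-aws, CgroupDriver:cgroupfs in the docker info above, and the kic container sharing the host cgroup namespace via "CgroupnsMode": "host"), the legacy hierarchy is the default, which is consistent with the repeated kubelet validation failure.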
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-328069 -n no-preload-328069: exit status 2 (427.89441ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-328069" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (269.30s)
E1213 10:23:21.912061    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:21.918550    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:21.930966    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:21.952714    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:21.996285    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:22.078111    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:22.239624    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:22.560886    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:23.202346    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:24.484853    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:23:27.046838    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/no-preload-328069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
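
These repeated cert_rotation errors come from the test harness's shared Kubernetes client: its transport cache still watches the client.crt of the no-preload-328069 profile even though that file no longer exists on disk, so every reload attempt fails until the stale kubeconfig context is removed. One generic way to spot such dangling certificate references (illustrative only; the kubeconfig path is the KUBECONFIG value seen later in this report):

	grep -n "client-certificate" /home/jenkins/minikube-integration/22128-2315/kubeconfig
	# any entry pointing into a removed profile directory explains the reload failures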


Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.54
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.35
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.12
18 TestDownloadOnly/v1.34.2/DeleteAll 0.25
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.69
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.59
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 169.59
38 TestAddons/serial/Volcano 41.75
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 8.9
44 TestAddons/parallel/Registry 16.39
45 TestAddons/parallel/RegistryCreds 0.8
46 TestAddons/parallel/Ingress 18.05
47 TestAddons/parallel/InspektorGadget 11.85
48 TestAddons/parallel/MetricsServer 6.94
50 TestAddons/parallel/CSI 62.43
51 TestAddons/parallel/Headlamp 17.02
52 TestAddons/parallel/CloudSpanner 6.59
53 TestAddons/parallel/LocalPath 52.19
54 TestAddons/parallel/NvidiaDevicePlugin 5.56
55 TestAddons/parallel/Yakd 11.75
57 TestAddons/StoppedEnableDisable 12.36
58 TestCertOptions 34.31
59 TestCertExpiration 220.84
61 TestForceSystemdFlag 36.06
62 TestForceSystemdEnv 36.66
63 TestDockerEnvContainerd 44.92
67 TestErrorSpam/setup 28.33
68 TestErrorSpam/start 0.8
69 TestErrorSpam/status 1.11
70 TestErrorSpam/pause 1.73
71 TestErrorSpam/unpause 1.84
72 TestErrorSpam/stop 1.64
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 49.74
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.99
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
84 TestFunctional/serial/CacheCmd/cache/add_local 1.44
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 47.81
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.48
95 TestFunctional/serial/LogsFileCmd 1.47
96 TestFunctional/serial/InvalidService 4.32
98 TestFunctional/parallel/ConfigCmd 0.47
99 TestFunctional/parallel/DashboardCmd 7.01
100 TestFunctional/parallel/DryRun 0.61
101 TestFunctional/parallel/InternationalLanguage 0.21
102 TestFunctional/parallel/StatusCmd 1.21
106 TestFunctional/parallel/ServiceCmdConnect 7.62
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 21.12
110 TestFunctional/parallel/SSHCmd 0.7
111 TestFunctional/parallel/CpCmd 2.11
113 TestFunctional/parallel/FileSync 0.35
114 TestFunctional/parallel/CertSync 2.23
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
122 TestFunctional/parallel/License 0.34
123 TestFunctional/parallel/Version/short 0.05
124 TestFunctional/parallel/Version/components 1.35
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
130 TestFunctional/parallel/ImageCommands/Setup 0.69
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.51
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.44
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
138 TestFunctional/parallel/ProfileCmd/profile_list 0.51
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
156 TestFunctional/parallel/ServiceCmd/List 0.63
157 TestFunctional/parallel/MountCmd/any-port 9.63
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
160 TestFunctional/parallel/ServiceCmd/Format 0.46
161 TestFunctional/parallel/ServiceCmd/URL 0.53
162 TestFunctional/parallel/MountCmd/specific-port 2.11
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.16
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.38
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.01
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.86
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.94
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.94
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.45
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.7
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.24
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.31
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.72
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.28
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.39
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.37
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.95
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.07
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.55
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.28
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.13
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.35
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.68
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.52
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.16
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 185.3
265 TestMultiControlPlane/serial/DeployApp 7.62
266 TestMultiControlPlane/serial/PingHostFromPods 1.59
267 TestMultiControlPlane/serial/AddWorkerNode 59.26
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
270 TestMultiControlPlane/serial/CopyFile 21.25
271 TestMultiControlPlane/serial/StopSecondaryNode 12.97
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
273 TestMultiControlPlane/serial/RestartSecondaryNode 13.41
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.34
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.83
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.2
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.87
278 TestMultiControlPlane/serial/StopCluster 36.66
279 TestMultiControlPlane/serial/RestartCluster 60.74
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.85
281 TestMultiControlPlane/serial/AddSecondaryNode 76.51
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
287 TestJSONOutput/start/Command 79.36
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.71
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.98
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.23
312 TestKicCustomNetwork/create_custom_network 38.34
313 TestKicCustomNetwork/use_default_bridge_network 36.54
314 TestKicExistingNetwork 37.43
315 TestKicCustomSubnet 35.25
316 TestKicStaticIP 33.53
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 68.25
321 TestMountStart/serial/StartWithMountFirst 5.93
322 TestMountStart/serial/VerifyMountFirst 0.31
323 TestMountStart/serial/StartWithMountSecond 8.46
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.26
327 TestMountStart/serial/Stop 1.3
328 TestMountStart/serial/RestartStopped 7.49
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 104.61
333 TestMultiNode/serial/DeployApp2Nodes 5.05
334 TestMultiNode/serial/PingHostFrom2Pods 0.98
335 TestMultiNode/serial/AddNode 27.42
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.57
339 TestMultiNode/serial/StopNode 2.45
340 TestMultiNode/serial/StartAfterStop 7.89
341 TestMultiNode/serial/RestartKeepsNodes 77.77
342 TestMultiNode/serial/DeleteNode 5.74
343 TestMultiNode/serial/StopMultiNode 24.06
344 TestMultiNode/serial/RestartMultiNode 52.72
345 TestMultiNode/serial/ValidateNameConflict 37.01
350 TestPreload 119.21
352 TestScheduledStopUnix 108.81
355 TestInsufficientStorage 9.96
356 TestRunningBinaryUpgrade 61.92
359 TestMissingContainerUpgrade 128.14
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 49.87
363 TestNoKubernetes/serial/StartWithStopK8s 17.37
364 TestNoKubernetes/serial/Start 7.21
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
367 TestNoKubernetes/serial/ProfileList 0.71
368 TestNoKubernetes/serial/Stop 1.28
369 TestNoKubernetes/serial/StartNoArgs 6.45
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
371 TestStoppedBinaryUpgrade/Setup 0.81
372 TestStoppedBinaryUpgrade/Upgrade 57.96
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.87
382 TestPause/serial/Start 83.81
383 TestPause/serial/SecondStartNoReconfiguration 6.85
384 TestPause/serial/Pause 0.74
385 TestPause/serial/VerifyStatus 0.39
386 TestPause/serial/Unpause 0.8
387 TestPause/serial/PauseAgain 0.85
388 TestPause/serial/DeletePaused 2.49
389 TestPause/serial/VerifyDeletedResources 0.39
397 TestNetworkPlugins/group/false 4.21
402 TestStartStop/group/old-k8s-version/serial/FirstStart 62.97
403 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
405 TestStartStop/group/old-k8s-version/serial/Stop 12.13
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
407 TestStartStop/group/old-k8s-version/serial/SecondStart 49.35
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
410 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
411 TestStartStop/group/old-k8s-version/serial/Pause 3.28
413 TestStartStop/group/embed-certs/serial/FirstStart 77.11
416 TestStartStop/group/embed-certs/serial/DeployApp 8.35
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
418 TestStartStop/group/embed-certs/serial/Stop 12.17
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/embed-certs/serial/SecondStart 51.19
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
424 TestStartStop/group/embed-certs/serial/Pause 3.1
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.14
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.14
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.27
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.07
440 TestStartStop/group/no-preload/serial/Stop 1.29
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.3
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
453 TestNetworkPlugins/group/auto/Start 80.18
454 TestNetworkPlugins/group/auto/KubeletFlags 0.34
455 TestNetworkPlugins/group/auto/NetCatPod 10.28
456 TestNetworkPlugins/group/auto/DNS 0.17
457 TestNetworkPlugins/group/auto/Localhost 0.14
458 TestNetworkPlugins/group/auto/HairPin 0.14
459 TestNetworkPlugins/group/kindnet/Start 78.96
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
462 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
463 TestNetworkPlugins/group/kindnet/DNS 0.2
464 TestNetworkPlugins/group/kindnet/Localhost 0.16
465 TestNetworkPlugins/group/kindnet/HairPin 0.15
466 TestNetworkPlugins/group/calico/Start 58.04
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.34
470 TestNetworkPlugins/group/calico/NetCatPod 10.3
471 TestNetworkPlugins/group/calico/DNS 0.18
472 TestNetworkPlugins/group/calico/Localhost 0.14
473 TestNetworkPlugins/group/calico/HairPin 0.18
474 TestNetworkPlugins/group/custom-flannel/Start 55.63
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
477 TestNetworkPlugins/group/custom-flannel/DNS 0.18
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
480 TestNetworkPlugins/group/enable-default-cni/Start 68.69
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
486 TestNetworkPlugins/group/flannel/Start 69.99
487 TestNetworkPlugins/group/bridge/Start 51.1
488 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
489 TestNetworkPlugins/group/bridge/NetCatPod 9.31
490 TestNetworkPlugins/group/flannel/ControllerPod 6
491 TestNetworkPlugins/group/bridge/DNS 0.17
492 TestNetworkPlugins/group/bridge/Localhost 0.14
493 TestNetworkPlugins/group/bridge/HairPin 0.15
494 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
495 TestNetworkPlugins/group/flannel/NetCatPod 10.27
496 TestNetworkPlugins/group/flannel/DNS 0.25
497 TestNetworkPlugins/group/flannel/Localhost 0.18
498 TestNetworkPlugins/group/flannel/HairPin 0.24
TestDownloadOnly/v1.28.0/json-events (5.54s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-688736 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-688736 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.536867313s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.54s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 08:29:14.000974    4120 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1213 08:29:14.001057    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
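
The preload-exists subtests are essentially filesystem checks: they pass when the version-specific preload tarball is already present in the local cache, here left behind by the preceding json-events download. The manual equivalent, using the cache path logged above:

	ls -lh /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4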

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-688736
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-688736: exit status 85 (95.112145ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-688736 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-688736 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:08.508693    4126 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:08.508900    4126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:08.508926    4126 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:08.508946    4126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:08.509268    4126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	W1213 08:29:08.509480    4126 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22128-2315/.minikube/config/config.json: open /home/jenkins/minikube-integration/22128-2315/.minikube/config/config.json: no such file or directory
	I1213 08:29:08.509961    4126 out.go:368] Setting JSON to true
	I1213 08:29:08.510757    4126 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":701,"bootTime":1765613848,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:29:08.510849    4126 start.go:143] virtualization:  
	I1213 08:29:08.516476    4126 out.go:99] [download-only-688736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 08:29:08.516690    4126 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 08:29:08.516811    4126 notify.go:221] Checking for updates...
	I1213 08:29:08.521014    4126 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:08.524553    4126 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:08.527902    4126 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:29:08.531061    4126 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:29:08.534280    4126 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 08:29:08.540654    4126 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:08.540924    4126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:08.574202    4126 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:29:08.574332    4126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:08.966283    4126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 08:29:08.956863438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:08.966389    4126 docker.go:319] overlay module found
	I1213 08:29:08.969762    4126 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:08.969804    4126 start.go:309] selected driver: docker
	I1213 08:29:08.969811    4126 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:08.969919    4126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:09.032597    4126 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 08:29:09.024026257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:09.032752    4126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:09.033057    4126 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 08:29:09.033233    4126 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:09.036526    4126 out.go:171] Using Docker driver with root privileges
	I1213 08:29:09.039640    4126 cni.go:84] Creating CNI manager for ""
	I1213 08:29:09.039713    4126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1213 08:29:09.039728    4126 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 08:29:09.039817    4126 start.go:353] cluster config:
	{Name:download-only-688736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-688736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:09.042876    4126 out.go:99] Starting "download-only-688736" primary control-plane node in "download-only-688736" cluster
	I1213 08:29:09.042902    4126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1213 08:29:09.045982    4126 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:29:09.046036    4126 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 08:29:09.046092    4126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:29:09.062163    4126 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:09.062319    4126 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:29:09.062422    4126 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:29:09.098540    4126 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:29:09.098578    4126 cache.go:65] Caching tarball of preloaded images
	I1213 08:29:09.098739    4126 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 08:29:09.102179    4126 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 08:29:09.102213    4126 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1213 08:29:09.186376    4126 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1213 08:29:09.186507    4126 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1213 08:29:12.106622    4126 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1213 08:29:12.107003    4126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/download-only-688736/config.json ...
	I1213 08:29:12.107038    4126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/download-only-688736/config.json: {Name:mkaf26fd5e4a8e63c0c9d6a9634f2add1fa08ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:12.107211    4126 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1213 08:29:12.107387    4126 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-688736 host does not exist
	  To start a cluster, run: "minikube start -p download-only-688736"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
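
As the quoted log shows, the preload download is checksum-verified: minikube fetches the tarball's MD5 from the GCS API and appends it to the download URL as ?checksum=md5:..., so a corrupted transfer fails instead of being cached. A manual re-verification would look like this (URL and checksum copied from the log above):

	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4   # expect 38d7f581f2fa4226c8af2c9106b982b7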

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-688736
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (3.35s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-928290 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-928290 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.353735794s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.35s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 08:29:17.820005    4120 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1213 08:29:17.820041    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.12s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-928290
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-928290: exit status 85 (115.278138ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-688736 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-688736 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-688736                                                                                                                                                               │ download-only-688736 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-928290 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-928290 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:14.508952    4324 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:14.509125    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:14.509136    4324 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:14.509142    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:14.509414    4324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:29:14.509815    4324 out.go:368] Setting JSON to true
	I1213 08:29:14.510563    4324 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":707,"bootTime":1765613848,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:29:14.510628    4324 start.go:143] virtualization:  
	I1213 08:29:14.513986    4324 out.go:99] [download-only-928290] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:29:14.514204    4324 notify.go:221] Checking for updates...
	I1213 08:29:14.517245    4324 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:14.520429    4324 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:14.523266    4324 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:29:14.526115    4324 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:29:14.528988    4324 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 08:29:14.534664    4324 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:14.534917    4324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:14.561163    4324 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:29:14.561269    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:14.621338    4324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 08:29:14.612169316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:14.621440    4324 docker.go:319] overlay module found
	I1213 08:29:14.624431    4324 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:14.624469    4324 start.go:309] selected driver: docker
	I1213 08:29:14.624477    4324 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:14.624571    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:14.679987    4324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 08:29:14.670851465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:14.680136    4324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:14.680394    4324 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 08:29:14.680538    4324 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:14.683663    4324 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-928290 host does not exist
	  To start a cluster, run: "minikube start -p download-only-928290"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.12s)

TestDownloadOnly/v1.34.2/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.25s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-928290
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.69s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-750522 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-750522 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.68779847s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.69s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 08:29:22.089869    4120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 08:29:22.089905    4120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-750522
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-750522: exit status 85 (89.10186ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-688736 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-688736 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-688736                                                                                                                                                                      │ download-only-688736 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-928290 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-928290 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ delete  │ -p download-only-928290                                                                                                                                                                      │ download-only-928290 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │ 13 Dec 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-750522 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-750522 │ jenkins │ v1.37.0 │ 13 Dec 25 08:29 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:29:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:29:18.444770    4518 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:29:18.444893    4518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:18.444904    4518 out.go:374] Setting ErrFile to fd 2...
	I1213 08:29:18.444910    4518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:29:18.445150    4518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:29:18.445577    4518 out.go:368] Setting JSON to true
	I1213 08:29:18.446254    4518 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":711,"bootTime":1765613848,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:29:18.446316    4518 start.go:143] virtualization:  
	I1213 08:29:18.470868    4518 out.go:99] [download-only-750522] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:29:18.471194    4518 notify.go:221] Checking for updates...
	I1213 08:29:18.497729    4518 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:29:18.518061    4518 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:29:18.546591    4518 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:29:18.574801    4518 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:29:18.596894    4518 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 08:29:18.660956    4518 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:29:18.661234    4518 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:29:18.680549    4518 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:29:18.680650    4518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:18.745554    4518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 08:29:18.736665087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:18.745657    4518 docker.go:319] overlay module found
	I1213 08:29:18.757560    4518 out.go:99] Using the docker driver based on user configuration
	I1213 08:29:18.757603    4518 start.go:309] selected driver: docker
	I1213 08:29:18.757611    4518 start.go:927] validating driver "docker" against <nil>
	I1213 08:29:18.757726    4518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:29:18.830353    4518 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 08:29:18.819972647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:29:18.830508    4518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:29:18.830772    4518 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 08:29:18.830920    4518 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:29:18.839914    4518 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-750522 host does not exist
	  To start a cluster, run: "minikube start -p download-only-750522"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
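The non-zero exit is the expected outcome here: a download-only profile never created a node, so there is no host to collect logs from. A one-line reproduction of the check (exit status 85 is what this run recorded):
  # Sketch only: logs against a never-started profile should fail.
  out/minikube-linux-arm64 logs -p download-only-750522; echo "exit=$?"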

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-750522
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1213 08:29:23.375126    4120 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-725517 --alsologtostderr --binary-mirror http://127.0.0.1:46253 --driver=docker  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-725517" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-725517
--- PASS: TestBinaryMirror (0.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-289425
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-289425: exit status 85 (75.634127ms)
-- stdout --
	* Profile "addons-289425" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289425"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-289425
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-289425: exit status 85 (66.152322ms)
-- stdout --
	* Profile "addons-289425" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-289425"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (169.59s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-289425 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-289425 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.592579699s)
--- PASS: TestAddons/Setup (169.59s)

TestAddons/serial/Volcano (41.75s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 53.190695ms
addons_test.go:878: volcano-admission stabilized in 53.576931ms
addons_test.go:870: volcano-scheduler stabilized in 54.105191ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-5l86l" [ff56896c-f9bf-4734-8dde-a99813b4ab6f] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003875607s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-bsxkr" [adbf0bf5-5546-458f-8316-71550a446020] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.002819638s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-7w6qg" [7467d80f-e4c7-459d-96f4-b939a3f78dd0] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005073877s
addons_test.go:905: (dbg) Run:  kubectl --context addons-289425 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-289425 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-289425 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [01aac884-827f-47c6-93fa-883db1c17988] Pending
helpers_test.go:353: "test-job-nginx-0" [01aac884-827f-47c6-93fa-883db1c17988] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [01aac884-827f-47c6-93fa-883db1c17988] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00334682s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable volcano --alsologtostderr -v=1: (12.075926623s)
--- PASS: TestAddons/serial/Volcano (41.75s)
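A hedged kubectl equivalent of the pod wait the helper performs above (label, namespace, and context taken from the log):
  # Sketch only: wait for the Volcano-scheduled pod to become Ready.
  kubectl --context addons-289425 -n my-volcano wait pod \
    -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s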

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-289425 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-289425 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (8.9s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-289425 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-289425 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [253a75e2-010f-40a8-bf67-b9118cf87ad4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [253a75e2-010f-40a8-bf67-b9118cf87ad4] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003196306s
addons_test.go:696: (dbg) Run:  kubectl --context addons-289425 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-289425 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-289425 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-289425 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.90s)

TestAddons/parallel/Registry (16.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.884703ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-b2s8b" [9c005367-56b3-42d0-9536-16f343152154] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00331319s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-6wqgl" [7fe63465-4ef1-459f-9a9d-8dc507ac10bb] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004071831s
addons_test.go:394: (dbg) Run:  kubectl --context addons-289425 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-289425 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-289425 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.226992302s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 ip
2025/12/13 08:33:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.39s)
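A hedged sketch combining the two probes above: resolve the node IP with `minikube ip`, then hit the registry proxy on port 5000 (using /v2/, the standard Docker registry API root, is an assumption here):
  # Sketch only: probe the registry addon from outside the cluster.
  IP=$(out/minikube-linux-arm64 -p addons-289425 ip)
  curl -sI "http://${IP}:5000/v2/"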

TestAddons/parallel/RegistryCreds (0.8s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.903056ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-289425
addons_test.go:334: (dbg) Run:  kubectl --context addons-289425 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.80s)

TestAddons/parallel/Ingress (18.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-289425 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-289425 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-289425 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [77873684-8706-4bf3-9231-153b36bc167c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [77873684-8706-4bf3-9231-153b36bc167c] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003859485s
I1213 08:33:56.915605    4120 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-289425 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable ingress-dns --alsologtostderr -v=1: (1.659841613s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable ingress --alsologtostderr -v=1: (8.361135915s)
--- PASS: TestAddons/parallel/Ingress (18.05s)

TestAddons/parallel/InspektorGadget (11.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-bbv4g" [d50cbf03-b16b-4304-ac58-28b412234745] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003798668s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable inspektor-gadget --alsologtostderr -v=1: (5.839966985s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/MetricsServer (6.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.833314ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-g4bgc" [6f2e0b78-3977-4d8c-bcd2-7197960d10c2] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003419907s
addons_test.go:465: (dbg) Run:  kubectl --context addons-289425 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.94s)

TestAddons/parallel/CSI (62.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1213 08:33:30.790711    4120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:33:30.795181    4120 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:33:30.795205    4120 kapi.go:107] duration metric: took 6.327081ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.337132ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-289425 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-289425 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [0b4641a5-f658-405b-987c-ba4ee66d7495] Pending
helpers_test.go:353: "task-pv-pod" [0b4641a5-f658-405b-987c-ba4ee66d7495] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [0b4641a5-f658-405b-987c-ba4ee66d7495] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004933158s
addons_test.go:574: (dbg) Run:  kubectl --context addons-289425 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-289425 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-289425 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-289425 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-289425 delete pod task-pv-pod: (1.335679362s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-289425 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-289425 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-289425 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [1fdfa272-f2be-491c-8e61-52e23611ba52] Pending
helpers_test.go:353: "task-pv-pod-restore" [1fdfa272-f2be-491c-8e61-52e23611ba52] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [1fdfa272-f2be-491c-8e61-52e23611ba52] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003251314s
addons_test.go:616: (dbg) Run:  kubectl --context addons-289425 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-289425 delete pod task-pv-pod-restore: (1.210242677s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-289425 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-289425 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.837891571s)
--- PASS: TestAddons/parallel/CSI (62.43s)
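The repeated `get pvc` calls above are a poll loop in the test helper. A hedged shell sketch of the same wait (timeout handling omitted for brevity):
  # Sketch only: poll the PVC phase until the CSI driver binds it.
  while [ "$(kubectl --context addons-289425 get pvc hpvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
    sleep 2
  done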

TestAddons/parallel/Headlamp (17.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-289425 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-289425 --alsologtostderr -v=1: (1.033266547s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-l77q2" [dcbd9039-1854-405b-955b-382fd4a37df7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-l77q2" [dcbd9039-1854-405b-955b-382fd4a37df7] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004097698s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable headlamp --alsologtostderr -v=1: (5.982852852s)
--- PASS: TestAddons/parallel/Headlamp (17.02s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-59hhx" [1750e662-4201-46cc-ac3b-55f95f8021e8] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002614536s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (52.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-289425 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-289425 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [86ccd0c4-5508-4ebb-aa6b-4581885a478c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [86ccd0c4-5508-4ebb-aa6b-4581885a478c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [86ccd0c4-5508-4ebb-aa6b-4581885a478c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003098624s
addons_test.go:969: (dbg) Run:  kubectl --context addons-289425 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 ssh "cat /opt/local-path-provisioner/pvc-34792aad-dc1a-4bfa-8a99-ed3f3c38ba3b_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-289425 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-289425 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.068657926s)
--- PASS: TestAddons/parallel/LocalPath (52.19s)

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gpl7z" [25b4fbb8-3b4f-4a5f-88ba-7e14e0204340] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003544488s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-h6nnz" [3802c8b0-9d45-41bf-8d0e-4aefb82d1cee] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003420236s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-289425 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-289425 addons disable yakd --alsologtostderr -v=1: (5.742119768s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (12.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-289425
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-289425: (12.084367949s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-289425
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-289425
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-289425
--- PASS: TestAddons/StoppedEnableDisable (12.36s)

TestCertOptions (34.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-993197 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-993197 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.474401384s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-993197 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-993197 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-993197 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-993197" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-993197
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-993197: (2.087409419s)
--- PASS: TestCertOptions (34.31s)
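A hedged sketch of what the openssl step above asserts: the generated apiserver certificate must carry the extra --apiserver-ips/--apiserver-names as SANs (the grep is an addition for readability, not part of the test):
  # Sketch only: show the SAN block of the generated apiserver certificate.
  out/minikube-linux-arm64 -p cert-options-993197 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'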

TestCertExpiration (220.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-482836 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1213 09:44:43.075623    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-482836 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (32.344103205s)
E1213 09:46:40.016492    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:47:14.443214    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-482836 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-482836 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.156604356s)
helpers_test.go:176: Cleaning up "cert-expiration-482836" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-482836
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-482836: (2.339804621s)
--- PASS: TestCertExpiration (220.84s)
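A minimal sketch of inspecting the validity window that --cert-expiration controls (cert path reused from TestCertOptions above; the profile only exists until the cleanup step runs):
  # Sketch only: print notBefore/notAfter for the apiserver certificate.
  out/minikube-linux-arm64 -p cert-expiration-482836 ssh \
    "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"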

TestForceSystemdFlag (36.06s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-202115 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-202115 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.659695013s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-202115 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-202115" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-202115
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-202115: (2.080525023s)
--- PASS: TestForceSystemdFlag (36.06s)
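A hedged sketch of the assertion behind the config.toml check above: with --force-systemd, containerd's runc runtime should be configured with SystemdCgroup = true (the standard containerd key for the systemd cgroup driver):
  # Sketch only: confirm the cgroup driver setting the flag is meant to flip.
  out/minikube-linux-arm64 -p force-systemd-flag-202115 ssh \
    "grep SystemdCgroup /etc/containerd/config.toml"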

TestForceSystemdEnv (36.66s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-319338 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1213 09:43:51.888027    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-319338 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.270856967s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-319338 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-319338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-319338
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-319338: (2.068450172s)
--- PASS: TestForceSystemdEnv (36.66s)
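This is the env-driven variant of TestForceSystemdFlag. A hedged sketch of the equivalent invocation (the MINIKUBE_FORCE_SYSTEMD variable is an assumption; the log does not show how the test sets it):
  # Sketch only: same SystemdCgroup outcome, driven by environment instead of flag.
  MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-319338 \
    --memory=3072 --driver=docker --container-runtime=containerd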

TestDockerEnvContainerd (44.92s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-483925 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-483925 --driver=docker  --container-runtime=containerd: (28.846577267s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-483925"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-483925": (1.115974623s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mhOQ75Auo1cy/agent.23490" SSH_AGENT_PID="23491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mhOQ75Auo1cy/agent.23490" SSH_AGENT_PID="23491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mhOQ75Auo1cy/agent.23490" SSH_AGENT_PID="23491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.501305188s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-mhOQ75Auo1cy/agent.23490" SSH_AGENT_PID="23491" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:176: Cleaning up "dockerenv-483925" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-483925
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-483925: (2.050567978s)
--- PASS: TestDockerEnvContainerd (44.92s)
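A hedged sketch of the flow this test drives end to end: export an SSH-backed DOCKER_HOST for the minikube node, then run docker commands against the node's daemon (all values taken from the log above):
  # Sketch only: point the local docker CLI at the minikube node over SSH.
  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-483925)"
  docker version
  docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env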

TestErrorSpam/setup (28.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-896460 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-896460 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-896460 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-896460 --driver=docker  --container-runtime=containerd: (28.331384616s)
--- PASS: TestErrorSpam/setup (28.33s)

                                                
                                    
TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.11s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (1.73s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
TestErrorSpam/unpause (1.84s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
TestErrorSpam/stop (1.64s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 stop: (1.436065027s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-896460 --log_dir /tmp/nospam-896460 stop
--- PASS: TestErrorSpam/stop (1.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.74s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1213 08:37:14.450456    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.456847    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.468319    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.489739    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.531212    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.612553    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:14.773881    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:15.095220    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:15.736742    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:17.018420    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:19.580520    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:37:24.702168    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-049633 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.735801434s)
--- PASS: TestFunctional/serial/StartWithProxy (49.74s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.99s)
=== RUN   TestFunctional/serial/SoftStart
I1213 08:37:34.149296    4120 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --alsologtostderr -v=8
E1213 08:37:34.944754    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-049633 --alsologtostderr -v=8: (6.984276206s)
functional_test.go:678: soft start took 6.985598813s for "functional-049633" cluster.
I1213 08:37:41.133893    4120 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.99s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-049633 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:3.1: (1.310466129s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:3.3: (1.124379703s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 cache add registry.k8s.io/pause:latest: (1.01979398s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-049633 /tmp/TestFunctionalserialCacheCmdcacheadd_local569089074/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache add minikube-local-cache-test:functional-049633
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache delete minikube-local-cache-test:functional-049633
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-049633
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.875074ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
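The cache/reload round-trip above amounts to the following (a sketch, run against the default profile rather than functional-049633):
  minikube cache add registry.k8s.io/pause:latest                     # pull once, store in the host-side cache
  minikube ssh -- sudo crictl rmi registry.k8s.io/pause:latest        # remove it from the node
  minikube cache reload                                               # re-push all cached images into the node
  minikube ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # image is back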

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 kubectl -- --context functional-049633 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-049633 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:37:55.426177    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:38:36.388918    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-049633 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.811121153s)
functional_test.go:776: restart took 47.811209737s for "functional-049633" cluster.
I1213 08:38:36.747645    4120 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.81s)
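--extra-config takes component.flag=value pairs passed through to the named control-plane component; the restart above sets an apiserver admission plugin. A minimal sketch against the default profile (the grep target is the flag set above):
  minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins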

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-049633 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 logs: (1.475007374s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.47s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 logs --file /tmp/TestFunctionalserialLogsFileCmd3643671556/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 logs --file /tmp/TestFunctionalserialLogsFileCmd3643671556/001/logs.txt: (1.465191595s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
TestFunctional/serial/InvalidService (4.32s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-049633 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-049633
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-049633: exit status 115 (598.246779ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32718 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-049633 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 config get cpus: exit status 14 (65.464216ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 config get cpus: exit status 14 (72.692766ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
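The non-zero exits above are deliberate: `config get` on an unset key fails with exit status 14. A sketch of the same cycle against the default profile:
  minikube config set cpus 2
  minikube config get cpus      # prints 2
  minikube config unset cpus
  minikube config get cpus      # exit status 14: key not found in config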

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.01s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-049633 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-049633 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 39659: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.01s)

                                                
                                    
TestFunctional/parallel/DryRun (0.61s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-049633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (302.177457ms)
-- stdout --
	* [functional-049633] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1213 08:39:14.581371   38316 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:14.581609   38316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:14.581644   38316 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:14.581664   38316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:14.581991   38316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:39:14.582464   38316 out.go:368] Setting JSON to false
	I1213 08:39:14.583586   38316 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1307,"bootTime":1765613848,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:39:14.583664   38316 start.go:143] virtualization:  
	I1213 08:39:14.587745   38316 out.go:179] * [functional-049633] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 08:39:14.591493   38316 notify.go:221] Checking for updates...
	I1213 08:39:14.599803   38316 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:14.602917   38316 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:14.605958   38316 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:39:14.608935   38316 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:39:14.611926   38316 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:39:14.614912   38316 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:14.618394   38316 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 08:39:14.618966   38316 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:14.674295   38316 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:39:14.674477   38316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:14.785609   38316 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 08:39:14.775764216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:39:14.785708   38316 docker.go:319] overlay module found
	I1213 08:39:14.789014   38316 out.go:179] * Using the docker driver based on existing profile
	I1213 08:39:14.791779   38316 start.go:309] selected driver: docker
	I1213 08:39:14.791810   38316 start.go:927] validating driver "docker" against &{Name:functional-049633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-049633 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:14.791936   38316 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:14.795384   38316 out.go:203] 
	W1213 08:39:14.798249   38316 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:39:14.801125   38316 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-049633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-049633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.873519ms)
-- stdout --
	* [functional-049633] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1213 08:39:18.458071   39405 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:18.458268   39405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:18.458295   39405 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:18.458312   39405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:18.459706   39405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 08:39:18.460333   39405 out.go:368] Setting JSON to false
	I1213 08:39:18.461398   39405 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1311,"bootTime":1765613848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 08:39:18.461498   39405 start.go:143] virtualization:  
	I1213 08:39:18.466466   39405 out.go:179] * [functional-049633] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 08:39:18.470588   39405 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:18.470670   39405 notify.go:221] Checking for updates...
	I1213 08:39:18.474491   39405 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:18.477316   39405 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 08:39:18.480172   39405 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 08:39:18.482988   39405 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 08:39:18.485986   39405 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:18.489459   39405 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 08:39:18.490092   39405 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:18.525141   39405 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 08:39:18.525272   39405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:18.587448   39405 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 08:39:18.577821879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 08:39:18.587584   39405 docker.go:319] overlay module found
	I1213 08:39:18.590618   39405 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 08:39:18.593582   39405 start.go:309] selected driver: docker
	I1213 08:39:18.593603   39405 start.go:927] validating driver "docker" against &{Name:functional-049633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-049633 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:18.593701   39405 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:18.597305   39405 out.go:203] 
	W1213 08:39:18.600198   39405 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:39:18.603081   39405 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.21s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-049633 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-049633 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-nktjp" [a7c42480-f813-4f49-a955-0c52679d798c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-nktjp" [a7c42480-f813-4f49-a955-0c52679d798c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003913693s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31432
functional_test.go:1680: http://192.168.49.2:31432: success! body:
Request served by hello-node-connect-7d85dfc575-nktjp

HTTP/1.1 GET /

Host: 192.168.49.2:31432
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)
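The same expose-and-probe flow, sketched against the default profile (deployment name hypothetical; the image is the one used above):
  kubectl create deployment hello-node --image kicbase/echo-server
  kubectl expose deployment hello-node --type=NodePort --port=8080
  url=$(minikube service hello-node --url)   # http://<node-ip>:<nodeport>
  curl "$url"                                # echo-server reports the request back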

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.12s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [995925f3-a571-4a82-a633-d31e08271bff] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003755547s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-049633 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-049633 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-049633 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-049633 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [5c0d69ab-9a10-4c05-ac93-bb834930fe18] Pending
helpers_test.go:353: "sp-pod" [5c0d69ab-9a10-4c05-ac93-bb834930fe18] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [5c0d69ab-9a10-4c05-ac93-bb834930fe18] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003569574s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-049633 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-049633 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-049633 delete -f testdata/storage-provisioner/pod.yaml: (1.113744494s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-049633 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [cd9b7ee9-578e-4904-8ed3-1f487cc3f578] Pending
helpers_test.go:353: "sp-pod" [cd9b7ee9-578e-4904-8ed3-1f487cc3f578] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003817486s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-049633 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.12s)
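The persistence check above boils down to: write through the claim, recreate the pod, and confirm the file survives. A sketch, with pvc.yaml and pod.yaml standing in for the testdata/storage-provisioner manifests referenced in the log:
  kubectl apply -f pvc.yaml                      # claim bound by the storage-provisioner addon
  kubectl apply -f pod.yaml                      # sp-pod mounts the claim at /tmp/mount
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f pod.yaml && kubectl apply -f pod.yaml
  kubectl exec sp-pod -- ls /tmp/mount           # foo is still there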

                                                
                                    
TestFunctional/parallel/SSHCmd (0.7s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.11s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh -n functional-049633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cp functional-049633:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd674919905/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh -n functional-049633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh -n functional-049633 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4120/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /etc/test/nested/copy/4120/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
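File sync works by copying everything under $MINIKUBE_HOME/.minikube/files into the node's root filesystem at start, which is how the nested hosts file above got there. A sketch with a hypothetical file:
  mkdir -p ~/.minikube/files/etc/test
  echo "hello from the host" > ~/.minikube/files/etc/test/hosts
  minikube start                             # the tree under files/ lands at / in the node
  minikube ssh -- sudo cat /etc/test/hosts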

                                                
                                    
TestFunctional/parallel/CertSync (2.23s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /etc/ssl/certs/4120.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /etc/ssl/certs/41202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /usr/share/ca-certificates/41202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-049633 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
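The label check above relies on kubectl's go-template output to print the label keys of the first node. A standalone sketch of the same query (the kubectl context name is taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// Print the label keys of the first node, mirroring the test's go-template query.
func main() {
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-049633",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}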

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "sudo systemctl is-active docker": exit status 1 (434.486574ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "sudo systemctl is-active crio": exit status 1 (342.690896ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
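`systemctl is-active` exits non-zero for any non-active unit (status 3 for "inactive"), so the exit status 1 reported by `minikube ssh` above is the expected outcome on a containerd cluster, not a failure. A sketch that applies the same reading, assuming the profile and unit names from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// Treat a non-zero exit plus "inactive" on stdout as the expected state
// for runtimes other than the active one (containerd here).
func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("minikube", "-p", "functional-049633",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) && strings.Contains(state, "inactive") {
			fmt.Printf("%s: disabled as expected\n", unit)
			continue
		}
		fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
	}
}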

                                                
                                    
TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 version -o=json --components: (1.354429697s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-049633 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-049633
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-049633
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-049633 image ls --format short --alsologtostderr:
I1213 08:39:28.022179   41051 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:28.022335   41051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:28.022362   41051 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:28.022369   41051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:28.022683   41051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:28.023360   41051 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:28.023566   41051 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:28.024172   41051 cli_runner.go:164] Run: docker container inspect functional-049633 --format={{.State.Status}}
I1213 08:39:28.053542   41051 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:28.053596   41051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049633
I1213 08:39:28.080949   41051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-049633/id_rsa Username:docker}
I1213 08:39:28.194702   41051 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
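`image ls` accepts short, table, json, and yaml formats, and the stderr trace above shows that each variant is served by the same `sudo crictl images --output json` call inside the node. A sketch that walks all four formats (profile name from the log):

package main

import (
	"fmt"
	"os/exec"
)

// List cluster-side images in each supported output format.
func main() {
	for _, format := range []string{"short", "table", "json", "yaml"} {
		out, err := exec.Command("minikube", "-p", "functional-049633",
			"image", "ls", "--format", format).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("--- format=%s ---\n%s\n", format, out)
	}
}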

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-049633 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:10afed │ 23MB   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/library/minikube-local-cache-test │ functional-049633  │ sha256:db4261 │ 992B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-049633  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-049633 image ls --format table --alsologtostderr:
I1213 08:39:29.556502   41478 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:29.556630   41478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.556642   41478 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:29.556648   41478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.556932   41478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:29.557604   41478 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.557848   41478 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.558424   41478 cli_runner.go:164] Run: docker container inspect functional-049633 --format={{.State.Status}}
I1213 08:39:29.575604   41478 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:29.575668   41478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049633
I1213 08:39:29.592794   41478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-049633/id_rsa Username:docker}
I1213 08:39:29.698128   41478 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-049633 image ls --format json --alsologtostderr:
[{"id":"sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-049633"],"size":"992"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-049633","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"
],"size":"21136588"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58
ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},
{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef1
20ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22985759"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-049633 image ls --format json --alsologtostderr:
I1213 08:39:29.329636   41442 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:29.329835   41442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.329860   41442 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:29.329879   41442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.330161   41442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:29.330795   41442 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.330959   41442 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.331553   41442 cli_runner.go:164] Run: docker container inspect functional-049633 --format={{.State.Status}}
I1213 08:39:29.350607   41442 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:29.350747   41442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049633
I1213 08:39:29.369721   41442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-049633/id_rsa Username:docker}
I1213 08:39:29.474658   41442 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-049633 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-049633
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22985759"
- id: sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-049633
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-049633 image ls --format yaml --alsologtostderr:
I1213 08:39:29.075841   41408 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:29.076021   41408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.076030   41408 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:29.076035   41408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:29.076319   41408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:29.076965   41408 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.077098   41408 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:29.077670   41408 cli_runner.go:164] Run: docker container inspect functional-049633 --format={{.State.Status}}
I1213 08:39:29.097652   41408 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:29.097715   41408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049633
I1213 08:39:29.122478   41408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-049633/id_rsa Username:docker}
I1213 08:39:29.230106   41408 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh pgrep buildkitd: exit status 1 (404.546113ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr: (3.32016665s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr:
I1213 08:39:28.718922   41277 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:28.719166   41277 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:28.719194   41277 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:28.719211   41277 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:28.719532   41277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:28.720255   41277 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:28.725770   41277 config.go:182] Loaded profile config "functional-049633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1213 08:39:28.726447   41277 cli_runner.go:164] Run: docker container inspect functional-049633 --format={{.State.Status}}
I1213 08:39:28.779806   41277 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:28.779869   41277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-049633
I1213 08:39:28.805052   41277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-049633/id_rsa Username:docker}
I1213 08:39:28.914440   41277 build_images.go:162] Building image from path: /tmp/build.2488192148.tar
I1213 08:39:28.914510   41277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:39:28.923424   41277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2488192148.tar
I1213 08:39:28.928105   41277 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2488192148.tar: stat -c "%s %y" /var/lib/minikube/build/build.2488192148.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2488192148.tar': No such file or directory
I1213 08:39:28.928173   41277 ssh_runner.go:362] scp /tmp/build.2488192148.tar --> /var/lib/minikube/build/build.2488192148.tar (3072 bytes)
I1213 08:39:28.952779   41277 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2488192148
I1213 08:39:28.962529   41277 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2488192148 -xf /var/lib/minikube/build/build.2488192148.tar
I1213 08:39:28.974897   41277 containerd.go:394] Building image: /var/lib/minikube/build/build.2488192148
I1213 08:39:28.974960   41277 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2488192148 --local dockerfile=/var/lib/minikube/build/build.2488192148 --output type=image,name=localhost/my-image:functional-049633
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9fdddb352648cf486bb109cb37af373ecdb482047f67e5d2a54418001d2d535f 0.0s done
#8 exporting config sha256:a7d159c1e40c89f5d798152e652f85b72028920e6559f7cd7ddf2143a385071d 0.0s done
#8 naming to localhost/my-image:functional-049633 done
#8 DONE 0.2s
I1213 08:39:31.926676   41277 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2488192148 --local dockerfile=/var/lib/minikube/build/build.2488192148 --output type=image,name=localhost/my-image:functional-049633: (2.951691473s)
I1213 08:39:31.926743   41277 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2488192148
I1213 08:39:31.934968   41277 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2488192148.tar
I1213 08:39:31.952035   41277 build_images.go:218] Built localhost/my-image:functional-049633 from /tmp/build.2488192148.tar
I1213 08:39:31.952079   41277 build_images.go:134] succeeded building to: functional-049633
I1213 08:39:31.952088   41277 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
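The trace above shows the client-side build path: the local context is tarred to /tmp, copied into the node, unpacked under /var/lib/minikube/build, and built with `buildctl build --frontend dockerfile.v0`. From the caller's perspective that whole pipeline is one command; a sketch, with the tag and context directory taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Build a local Dockerfile context directly into the cluster's image store.
func main() {
	out, err := exec.Command("minikube", "-p", "functional-049633",
		"image", "build", "-t", "localhost/my-image:functional-049633",
		"testdata/build").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}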

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-049633
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.51s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr: (1.198200968s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr: (1.140520973s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-049633
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-049633 image load --daemon kicbase/echo-server:functional-049633 --alsologtostderr: (1.102224068s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)
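The three load-daemon tests above share one pattern: retag an image in the host docker daemon, then copy it into the cluster's containerd store with `image load --daemon`. A sketch of that sequence (image and profile names as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// Pull, retag, and load an image from the host daemon into the cluster.
func main() {
	steps := [][]string{
		{"docker", "pull", "kicbase/echo-server:latest"},
		{"docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-049633"},
		{"minikube", "-p", "functional-049633", "image", "load",
			"--daemon", "kicbase/echo-server:functional-049633"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %v: %s", s, err, out))
		}
	}
}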

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "439.083235ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "74.963053ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "487.685309ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "76.186963ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)
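`profile list -o json` is machine-readable; recent minikube releases emit a top-level "valid"/"invalid" split whose entries carry a "Name" field. A decoding sketch under that assumption (the shape is not a stable API, so treat the struct as illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of a profile entry; unknown fields are ignored by encoding/json.
type profile struct {
	Name string `json:"Name"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var lists map[string][]profile // assumed "valid"/"invalid" keys
	if err := json.Unmarshal(out, &lists); err != nil {
		panic(err)
	}
	for _, p := range lists["valid"] {
		fmt.Println("valid profile:", p.Name)
	}
}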

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image save kicbase/echo-server:functional-049633 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)
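`image save` writes the cluster-side image to a local tarball, which the later ImageLoadFromFile step feeds back through `image load`. A sketch of the export half (the tar path here is illustrative, not the workspace path used above):

package main

import (
	"fmt"
	"os/exec"
)

// Export a cluster-side image to a tarball on the host.
func main() {
	tar := "/tmp/echo-server-save.tar" // illustrative path
	if out, err := exec.Command("minikube", "-p", "functional-049633",
		"image", "save", "kicbase/echo-server:functional-049633",
		tar).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("save failed: %v: %s", err, out))
	}
	fmt.Println("image saved to", tar)
}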

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 36966: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image rm kicbase/echo-server:functional-049633 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-049633 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [77706a9d-41b3-4176-8f5c-c27a2dd57c33] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [77706a9d-41b3-4176-8f5c-c27a2dd57c33] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004320195s
I1213 08:39:00.335404    4120 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
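The setup step applies testdata/testsvc.yaml and polls pods labelled run=nginx-svc until they report Running. `kubectl wait` expresses the same readiness check in a single call; a sketch, with the context, label, and timeout assumed from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Block until pods matching the label report Ready (or the timeout fires).
func main() {
	out, err := exec.Command("kubectl", "--context", "functional-049633",
		"wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
		"--timeout=240s").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}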

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-049633
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 image save --daemon kicbase/echo-server:functional-049633 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-049633
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-049633 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
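With `minikube tunnel` running, the LoadBalancer service is assigned an ingress IP, which the test reads via JSONPath. The same query as a sketch (context and service name from the log):

package main

import (
	"fmt"
	"os/exec"
)

// Read the LoadBalancer ingress IP assigned while the tunnel is up.
func main() {
	out, err := exec.Command("kubectl", "--context", "functional-049633",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ingress IP:", string(out))
}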

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.49.188 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-049633 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-049633 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-049633 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-z7bz2" [666e4f65-b020-40d3-8798-5d198fa2815c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-z7bz2" [666e4f65-b020-40d3-8798-5d198fa2815c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004403632s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
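DeployApp is a two-command setup: create a deployment from the echo-server image, then publish it as a NodePort service on port 8080. A sketch of that sequence (names and port as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// Create the hello-node deployment and expose it as a NodePort service.
func main() {
	cmds := [][]string{
		{"kubectl", "--context", "functional-049633", "create", "deployment",
			"hello-node", "--image", "kicbase/echo-server"},
		{"kubectl", "--context", "functional-049633", "expose", "deployment",
			"hello-node", "--type=NodePort", "--port=8080"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %v: %s", c, err, out))
		}
	}
}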

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdany-port3185320072/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615155122236479" to /tmp/TestFunctionalparallelMountCmdany-port3185320072/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615155122236479" to /tmp/TestFunctionalparallelMountCmdany-port3185320072/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615155122236479" to /tmp/TestFunctionalparallelMountCmdany-port3185320072/001/test-1765615155122236479
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (453.103839ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 08:39:15.577096    4120 retry.go:31] will retry after 668.905566ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 test-1765615155122236479
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh cat /mount-9p/test-1765615155122236479
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-049633 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [08d3b9e1-99d5-4bbc-99f7-997f732bffde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [08d3b9e1-99d5-4bbc-99f7-997f732bffde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [08d3b9e1-99d5-4bbc-99f7-997f732bffde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003479831s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-049633 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdany-port3185320072/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.63s)
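`minikube mount` runs as a foreground process serving the host directory over 9p, which is why the test starts it as a daemon, retries `findmnt` until the mount appears, and tears it down afterwards. A sketch of that lifecycle (host and guest paths are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Start the 9p mount in the background, poll until it is visible in the
// guest, then stop the mount process.
func main() {
	mount := exec.Command("minikube", "mount", "-p", "functional-049633",
		"/tmp/demo:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-049633",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never appeared in the guest")
}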

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service list -o json
functional_test.go:1504: Took "676.221648ms" to run "out/minikube-linux-arm64 -p functional-049633 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32562
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32562
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)
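HTTPS, Format and URL above are three output modes over the same NodePort endpoint (32562 in this run). The shapes being asserted, assuming the hello-node service deployed earlier in the suite:

	out/minikube-linux-arm64 -p functional-049633 service --namespace=default --https --url hello-node   # https://<node-ip>:<nodeport>
	out/minikube-linux-arm64 -p functional-049633 service hello-node --url --format={{.IP}}              # node IP only, via Go template
	out/minikube-linux-arm64 -p functional-049633 service hello-node --url                               # http://<node-ip>:<nodeport>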

TestFunctional/parallel/MountCmd/specific-port (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdspecific-port3636403034/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (459.904267ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 08:39:25.213619    4120 retry.go:31] will retry after 484.074309ms: exit status 1
2025/12/13 08:39:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdspecific-port3636403034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "sudo umount -f /mount-9p": exit status 1 (304.421441ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-049633 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdspecific-port3636403034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)
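specific-port differs from any-port only in pinning the mount server with --port 46464 and in checking the failure mode of a forced unmount after the daemon is gone: umount reports "not mounted." and exits 32, which ssh propagates as the non-zero status seen above. A sketch with a hypothetical host directory:

	out/minikube-linux-arm64 mount -p functional-049633 /tmp/demo-mnt:/mount-9p --port 46464 &
	out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T /mount-9p | grep 9p"
	# after the mount process is stopped, this reports "not mounted." and fails
	out/minikube-linux-arm64 -p functional-049633 ssh "sudo umount -f /mount-9p"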

TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T" /mount1: exit status 1 (730.3679ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 08:39:27.595305    4120 retry.go:31] will retry after 371.261542ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-049633 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-049633 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.16s)
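VerifyCleanup asserts that a single --kill=true invocation tears down every mount daemon for the profile, which is why all three stop attempts afterwards find no parent process. The shape of the check, again with a hypothetical host directory:

	# three concurrent mounts of the same host directory
	out/minikube-linux-arm64 mount -p functional-049633 /tmp/demo-mnt:/mount1 &
	out/minikube-linux-arm64 mount -p functional-049633 /tmp/demo-mnt:/mount2 &
	out/minikube-linux-arm64 mount -p functional-049633 /tmp/demo-mnt:/mount3 &
	# one call kills all mount processes for the profile
	out/minikube-linux-arm64 mount -p functional-049633 --kill=true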

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-049633
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-049633
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-049633
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:3.1: (1.152628079s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:3.3: (1.186283078s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-074420 cache add registry.k8s.io/pause:latest: (1.035882065s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3778175008/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache add minikube-local-cache-test:functional-074420
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache delete minikube-local-cache-test:functional-074420
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.571322ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.86s)
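cache reload pushes images from minikube's host-side cache back into the node's container runtime, so the subtest first removes the image with crictl, proves inspecti fails, reloads, and proves inspecti succeeds. The same round trip by hand:

	out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-linux-arm64 -p functional-074420 cache reload
	out/minikube-linux-arm64 -p functional-074420 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again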

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs683699661/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 config get cpus: exit status 14 (79.377763ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 config get cpus: exit status 14 (72.331642ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.45s)
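Both Non-zero exits above are expected: config get on an unset key fails with exit status 14, and the subtest drives a full unset/set/get/unset cycle around that contract:

	out/minikube-linux-arm64 -p functional-074420 config unset cpus
	out/minikube-linux-arm64 -p functional-074420 config get cpus      # exit 14: key not found
	out/minikube-linux-arm64 -p functional-074420 config set cpus 2
	out/minikube-linux-arm64 -p functional-074420 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-074420 config unset cpus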

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (209.959869ms)
-- stdout --
	* [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 09:08:35.049326   70768 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:08:35.049564   70768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.049591   70768 out.go:374] Setting ErrFile to fd 2...
	I1213 09:08:35.049608   70768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:35.049920   70768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:08:35.050391   70768 out.go:368] Setting JSON to false
	I1213 09:08:35.051227   70768 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3067,"bootTime":1765613848,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:08:35.051317   70768 start.go:143] virtualization:  
	I1213 09:08:35.054551   70768 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:08:35.058377   70768 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:08:35.058539   70768 notify.go:221] Checking for updates...
	I1213 09:08:35.061980   70768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:08:35.065047   70768 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:08:35.069010   70768 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:08:35.071901   70768 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:08:35.074898   70768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:08:35.078383   70768 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:08:35.079047   70768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:08:35.118992   70768 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:08:35.119116   70768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:35.181476   70768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:35.171599264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:35.181583   70768 docker.go:319] overlay module found
	I1213 09:08:35.184665   70768 out.go:179] * Using the docker driver based on existing profile
	I1213 09:08:35.187608   70768 start.go:309] selected driver: docker
	I1213 09:08:35.187634   70768 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:35.187737   70768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:08:35.191256   70768 out.go:203] 
	W1213 09:08:35.194144   70768 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 09:08:35.196980   70768 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
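--dry-run exercises the full start validation path without touching the cluster: the first invocation deliberately undershoots memory and must exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second, with the profile's existing settings, must validate cleanly. Reduced to its two probes:

	# must fail validation: 250MiB is below the 1800MB usable minimum
	out/minikube-linux-arm64 start -p functional-074420 --dry-run --memory 250MB --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
	# must pass validation without starting anything
	out/minikube-linux-arm64 start -p functional-074420 --dry-run --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0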

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (184.79548ms)
-- stdout --
	* [functional-074420] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 09:08:34.857571   70723 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:08:34.857749   70723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:34.857759   70723 out.go:374] Setting ErrFile to fd 2...
	I1213 09:08:34.857764   70723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:08:34.858119   70723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:08:34.858538   70723 out.go:368] Setting JSON to false
	I1213 09:08:34.859367   70723 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3067,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:08:34.859433   70723 start.go:143] virtualization:  
	I1213 09:08:34.864681   70723 out.go:179] * [functional-074420] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 09:08:34.867613   70723 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:08:34.867714   70723 notify.go:221] Checking for updates...
	I1213 09:08:34.873273   70723 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:08:34.876184   70723 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:08:34.879027   70723 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:08:34.881994   70723 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:08:34.884866   70723 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:08:34.888219   70723 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:08:34.888848   70723 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:08:34.917133   70723 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:08:34.917300   70723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:08:34.971952   70723 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:08:34.962683485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:08:34.972062   70723 docker.go:319] overlay module found
	I1213 09:08:34.975300   70723 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 09:08:34.978046   70723 start.go:309] selected driver: docker
	I1213 09:08:34.978063   70723 start.go:927] validating driver "docker" against &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:08:34.978173   70723 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:08:34.981623   70723 out.go:203] 
	W1213 09:08:34.984439   70723 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 09:08:34.987173   70723 out.go:203] 
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.70s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh -n functional-074420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cp functional-074420:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2412260855/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh -n functional-074420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh -n functional-074420 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4120/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/test/nested/copy/4120/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/4120.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4120.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/41202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /usr/share/ca-certificates/41202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)
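Each synced certificate is checked in both of its copy locations plus a hash-named file under /etc/ssl/certs (51391683.0 and 3ec20f2e.0 look like OpenSSL subject-hash filenames, though the test treats them as opaque paths). A spot check of one certificate:

	out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/4120.pem"
	out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /usr/share/ca-certificates/4120.pem"
	out/minikube-linux-arm64 -p functional-074420 ssh "sudo cat /etc/ssl/certs/51391683.0"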

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active docker": exit status 1 (287.42866ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active crio": exit status 1 (269.281354ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
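With containerd as the active runtime, docker and crio must both be inactive; systemctl is-active exits 3 for an inactive unit (the "status 3" in the stderr above), and the test only requires a non-zero exit plus the literal "inactive" on stdout:

	out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active docker"   # prints inactive, exits non-zero
	out/minikube-linux-arm64 -p functional-074420 ssh "sudo systemctl is-active crio"     # likewise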

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-074420 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "335.570023ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.777411ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "318.096044ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "50.285996ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.37s)
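The two timings guard the cost gap between the full and light listings: --light skips probing each profile's cluster status (per minikube's own help text), which is why it returns in ~50ms against ~318ms for the full form here:

	out/minikube-linux-arm64 profile list -o json            # probes each profile's status
	out/minikube-linux-arm64 profile list -o json --light    # config only, no status probes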

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.50792ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 09:08:28.150768    4120 retry.go:31] will retry after 554.72162ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "sudo umount -f /mount-9p": exit status 1 (263.784947ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-074420 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1439982161/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.95s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T" /mount1: exit status 1 (590.962093ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1213 09:08:30.349606    4120 retry.go:31] will retry after 567.018346ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-074420 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-074420 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2297471746/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.07s)
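
The cleanup verified above reduces to one command, minikube mount --kill=true, which terminates every mount process recorded for the profile; that is why the per-mount "stopping" steps then report "unable to find parent, assuming dead". A hedged sketch of invoking it programmatically, profile name taken from this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// --kill=true tears down all mount daemons for the profile in one shot,
	// instead of stopping /mount1, /mount2 and /mount3 individually.
	out, err := exec.Command("minikube", "mount", "-p", "functional-074420",
		"--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("killing mounts: %v\n%s", err, out)
	}
}
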

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074420 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-074420
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-074420
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074420 image ls --format short --alsologtostderr:
I1213 09:08:47.856425   72944 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:47.856554   72944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:47.856574   72944 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:47.856582   72944 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:47.856845   72944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:47.857467   72944 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:47.857598   72944 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:47.858125   72944 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:47.875241   72944 ssh_runner.go:195] Run: systemctl --version
I1213 09:08:47.875301   72944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:47.892875   72944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:47.998037   72944 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074420 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-074420  │ sha256:db4261 │ 992B   │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ localhost/my-image                          │ functional-074420  │ sha256:0edb90 │ 831kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kicbase/echo-server               │ functional-074420  │ sha256:ce2d2c │ 2.17MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074420 image ls --format table --alsologtostderr:
I1213 09:08:52.085966   73337 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:52.086133   73337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:52.086162   73337 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:52.086180   73337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:52.086563   73337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:52.087885   73337 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:52.088057   73337 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:52.088644   73337 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:52.106131   73337 ssh_runner.go:195] Run: systemctl --version
I1213 09:08:52.106185   73337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:52.127830   73337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:52.229743   73337 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls --format json --alsologtostderr
E1213 09:08:51.888693    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074420 image ls --format json --alsologtostderr:
[{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-074420"],"size":"992"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:0edb90c0573c14c80840840e3427f11b2e2753cf04732bf7213aace1811bc560","repoDigests":[],"repoTags":["localhost/my-image:functional-074420"],"size":"830617"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-074420"],"size":"2173567"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074420 image ls --format json --alsologtostderr:
I1213 09:08:51.860299   73301 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:51.860405   73301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:51.860421   73301 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:51.860427   73301 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:51.860675   73301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:51.861278   73301 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:51.861398   73301 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:51.861904   73301 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:51.878319   73301 ssh_runner.go:195] Run: systemctl --version
I1213 09:08:51.878387   73301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:51.895137   73301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:51.997794   73301 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
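
Of the four list formats, the JSON one above is the easiest to consume programmatically. A sketch that parses it; the struct fields mirror the keys visible in this run's output (id, repoDigests, repoTags, size), not any published minikube schema, and the profile name is from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the keys seen in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-074420",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}
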

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-074420 image ls --format yaml --alsologtostderr:
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-074420
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:db42618cd478ef7ef9d01e6a82b9ab1f740d59e4b8a5b13060aa761c8246fe78
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-074420
size: "992"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074420 image ls --format yaml --alsologtostderr:
I1213 09:08:48.084982   72982 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:48.085152   72982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:48.085184   72982 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:48.085206   72982 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:48.085465   72982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:48.086105   72982 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:48.086268   72982 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:48.086831   72982 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:48.105387   72982 ssh_runner.go:195] Run: systemctl --version
I1213 09:08:48.105448   72982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:48.123715   72982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:48.226287   72982 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-074420 ssh pgrep buildkitd: exit status 1 (264.858388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image build -t localhost/my-image:functional-074420 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-074420 image build -t localhost/my-image:functional-074420 testdata/build --alsologtostderr: (3.053873464s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-074420 image build -t localhost/my-image:functional-074420 testdata/build --alsologtostderr:
I1213 09:08:48.594082   73088 out.go:360] Setting OutFile to fd 1 ...
I1213 09:08:48.594293   73088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:48.594316   73088 out.go:374] Setting ErrFile to fd 2...
I1213 09:08:48.594337   73088 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 09:08:48.594613   73088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 09:08:48.595285   73088 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:48.596034   73088 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 09:08:48.596620   73088 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 09:08:48.614511   73088 ssh_runner.go:195] Run: systemctl --version
I1213 09:08:48.614573   73088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 09:08:48.632750   73088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 09:08:48.738148   73088 build_images.go:162] Building image from path: /tmp/build.1886911670.tar
I1213 09:08:48.738217   73088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 09:08:48.746119   73088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1886911670.tar
I1213 09:08:48.749737   73088 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1886911670.tar: stat -c "%s %y" /var/lib/minikube/build/build.1886911670.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1886911670.tar': No such file or directory
I1213 09:08:48.749771   73088 ssh_runner.go:362] scp /tmp/build.1886911670.tar --> /var/lib/minikube/build/build.1886911670.tar (3072 bytes)
I1213 09:08:48.767238   73088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1886911670
I1213 09:08:48.775293   73088 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1886911670 -xf /var/lib/minikube/build/build.1886911670.tar
I1213 09:08:48.783216   73088 containerd.go:394] Building image: /var/lib/minikube/build/build.1886911670
I1213 09:08:48.783337   73088 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1886911670 --local dockerfile=/var/lib/minikube/build/build.1886911670 --output type=image,name=localhost/my-image:functional-074420
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:800b858f57224da6cff47c368e04b74c67c3f8d32746a84c1fce80631b6108a7
#8 exporting manifest sha256:800b858f57224da6cff47c368e04b74c67c3f8d32746a84c1fce80631b6108a7 0.0s done
#8 exporting config sha256:0edb90c0573c14c80840840e3427f11b2e2753cf04732bf7213aace1811bc560 0.0s done
#8 naming to localhost/my-image:functional-074420 done
#8 DONE 0.2s
I1213 09:08:51.556669   73088 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1886911670 --local dockerfile=/var/lib/minikube/build/build.1886911670 --output type=image,name=localhost/my-image:functional-074420: (2.773301305s)
I1213 09:08:51.556749   73088 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1886911670
I1213 09:08:51.566088   73088 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1886911670.tar
I1213 09:08:51.573443   73088 build_images.go:218] Built localhost/my-image:functional-074420 from /tmp/build.1886911670.tar
I1213 09:08:51.573476   73088 build_images.go:134] succeeded building to: functional-074420
I1213 09:08:51.573482   73088 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.55s)
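
The buildctl steps logged above imply a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt). A sketch that recreates a comparable context and drives the same build path; the Dockerfile content is inferred from the log, not copied from the repo's testdata/build:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Build context comparable to testdata/build; the Dockerfile below may
	// differ from the repo's copy byte for byte.
	dir, err := os.MkdirTemp("", "imagebuild")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// minikube tars the context, ships it to the node, and drives buildctl
	// there, the same sequence shown in the stderr log above.
	out, err := exec.Command("minikube", "-p", "functional-074420", "image",
		"build", "-t", "localhost/my-image:functional-074420", dir).CombinedOutput()
	if err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
	log.Printf("built:\n%s", out)
}
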

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image load --daemon kicbase/echo-server:functional-074420 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image load --daemon kicbase/echo-server:functional-074420 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-074420
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image load --daemon kicbase/echo-server:functional-074420 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image save kicbase/echo-server:functional-074420 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image rm kicbase/echo-server:functional-074420 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-074420
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 image save --daemon kicbase/echo-server:functional-074420 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)
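
Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon exercise one roundtrip: cluster image to tar, tar back into the cluster, then cluster image back into the host Docker daemon. A sketch of that roundtrip under the same profile; the /tmp path is a stand-in for the workspace path used above:

package main

import (
	"log"
	"os/exec"
)

// run executes one CLI step of the roundtrip and aborts on the first failure.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	const img = "kicbase/echo-server:functional-074420"
	const tar = "/tmp/echo-server-save.tar" // stand-in path

	run("minikube", "-p", "functional-074420", "image", "save", img, tar)          // cluster -> tar
	run("docker", "rmi", img)                                                      // drop the host copy
	run("minikube", "-p", "functional-074420", "image", "load", tar)               // tar -> cluster
	run("minikube", "-p", "functional-074420", "image", "save", "--daemon", img)   // cluster -> host daemon
	run("docker", "image", "inspect", img)                                         // confirm it is back
}
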

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-074420 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-074420
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (185.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 09:11:40.007684    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.014119    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.025491    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.046849    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.088171    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.169446    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.330735    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:40.652445    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:41.294500    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:42.575896    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:45.137344    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:50.259091    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:12:00.504420    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:12:14.443725    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:12:20.985844    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:13:01.947677    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m4.366227291s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (185.30s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.62s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- rollout status deployment/busybox
E1213 09:13:51.888737    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 kubectl -- rollout status deployment/busybox: (4.753711398s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-4899w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-8ggg7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-gnc47 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-4899w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-8ggg7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-gnc47 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-4899w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-8ggg7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-gnc47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.62s)
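
Each rollout check above is the same kubectl exec plus nslookup probe repeated against three DNS names. A sketch of that loop; the context and pod name are copied from this run, and in practice the pod names would first be discovered the way the test does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pod name from this run; normally listed via `kubectl get pods` first.
	names := []string{
		"kubernetes.io",
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		out, err := exec.Command("kubectl", "--context", "ha-203458", "exec",
			"busybox-7b57f96db7-4899w", "--", "nslookup", name).CombinedOutput()
		if err != nil {
			fmt.Printf("lookup %s failed: %v\n%s", name, err, out)
			continue
		}
		fmt.Printf("%s resolved\n", name)
	}
}
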

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.59s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-4899w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-4899w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-8ggg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-8ggg7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-gnc47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 kubectl -- exec busybox-7b57f96db7-gnc47 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
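
The sh pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) pulls the resolved address of host.minikube.internal out of line 5 of busybox's nslookup output, and the follow-up ping confirms the host gateway (192.168.49.1 here) is reachable from inside the pod. A sketch of the same two steps; pod name and context are from this run, and the line/field offsets assume busybox's nslookup output layout:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: extract the host gateway address inside the pod.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-203458", "exec",
		"busybox-7b57f96db7-4899w", "--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", ip)

	// Step 2: ping the extracted address from the pod (the test pings the
	// expected gateway 192.168.49.1 directly).
	ping := exec.Command("kubectl", "--context", "ha-203458", "exec",
		"busybox-7b57f96db7-4899w", "--", "sh", "-c", "ping -c 1 "+ip)
	if err := ping.Run(); err != nil {
		panic(err)
	}
	fmt.Println("gateway reachable")
}
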

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node add --alsologtostderr -v 5
E1213 09:14:23.869012    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 node add --alsologtostderr -v 5: (58.17899679s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5: (1.082098581s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.26s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-203458 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.088991195s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (21.25s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 status --output json --alsologtostderr -v 5: (1.541921453s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp testdata/cp-test.txt ha-203458:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2843742943/001/cp-test_ha-203458.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458:/home/docker/cp-test.txt ha-203458-m02:/home/docker/cp-test_ha-203458_ha-203458-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test_ha-203458_ha-203458-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458:/home/docker/cp-test.txt ha-203458-m03:/home/docker/cp-test_ha-203458_ha-203458-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test_ha-203458_ha-203458-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458:/home/docker/cp-test.txt ha-203458-m04:/home/docker/cp-test_ha-203458_ha-203458-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test_ha-203458_ha-203458-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp testdata/cp-test.txt ha-203458-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2843742943/001/cp-test_ha-203458-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m02:/home/docker/cp-test.txt ha-203458:/home/docker/cp-test_ha-203458-m02_ha-203458.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test_ha-203458-m02_ha-203458.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m02:/home/docker/cp-test.txt ha-203458-m03:/home/docker/cp-test_ha-203458-m02_ha-203458-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test_ha-203458-m02_ha-203458-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m02:/home/docker/cp-test.txt ha-203458-m04:/home/docker/cp-test_ha-203458-m02_ha-203458-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test_ha-203458-m02_ha-203458-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp testdata/cp-test.txt ha-203458-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2843742943/001/cp-test_ha-203458-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m03:/home/docker/cp-test.txt ha-203458:/home/docker/cp-test_ha-203458-m03_ha-203458.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test_ha-203458-m03_ha-203458.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m03:/home/docker/cp-test.txt ha-203458-m02:/home/docker/cp-test_ha-203458-m03_ha-203458-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test_ha-203458-m03_ha-203458-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m03:/home/docker/cp-test.txt ha-203458-m04:/home/docker/cp-test_ha-203458-m03_ha-203458-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test_ha-203458-m03_ha-203458-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp testdata/cp-test.txt ha-203458-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2843742943/001/cp-test_ha-203458-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m04:/home/docker/cp-test.txt ha-203458:/home/docker/cp-test_ha-203458-m04_ha-203458.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458 "sudo cat /home/docker/cp-test_ha-203458-m04_ha-203458.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m04:/home/docker/cp-test.txt ha-203458-m02:/home/docker/cp-test_ha-203458-m04_ha-203458-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test_ha-203458-m04_ha-203458-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m04:/home/docker/cp-test.txt ha-203458-m03:/home/docker/cp-test_ha-203458-m04_ha-203458-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test_ha-203458-m04_ha-203458-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.25s)
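For reference, the copy round-trip exercised above can be reproduced by hand with the same CLI calls; a minimal sketch, reusing this run's ha-203458 profile and node names:

    # host -> node, then read it back over ssh
    out/minikube-linux-arm64 -p ha-203458 cp testdata/cp-test.txt ha-203458-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node, verified on the target node
    out/minikube-linux-arm64 -p ha-203458 cp ha-203458-m02:/home/docker/cp-test.txt ha-203458-m03:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p ha-203458 ssh -n ha-203458-m03 "sudo cat /home/docker/cp-test_copy.txt"

Note /home/docker/cp-test_copy.txt is an illustrative target path, not one of the paths the test itself writes.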

TestMultiControlPlane/serial/StopSecondaryNode (12.97s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 node stop m02 --alsologtostderr -v 5: (12.147552104s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5: exit status 7 (822.676763ms)

-- stdout --
	ha-203458
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-203458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-203458-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-203458-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1213 09:15:32.736787   90843 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:15:32.736924   90843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:15:32.736936   90843 out.go:374] Setting ErrFile to fd 2...
	I1213 09:15:32.736941   90843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:15:32.737201   90843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:15:32.737381   90843 out.go:368] Setting JSON to false
	I1213 09:15:32.737424   90843 mustload.go:66] Loading cluster: ha-203458
	I1213 09:15:32.737501   90843 notify.go:221] Checking for updates...
	I1213 09:15:32.738464   90843 config.go:182] Loaded profile config "ha-203458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:15:32.738535   90843 status.go:174] checking status of ha-203458 ...
	I1213 09:15:32.739128   90843 cli_runner.go:164] Run: docker container inspect ha-203458 --format={{.State.Status}}
	I1213 09:15:32.762815   90843 status.go:371] ha-203458 host status = "Running" (err=<nil>)
	I1213 09:15:32.762844   90843 host.go:66] Checking if "ha-203458" exists ...
	I1213 09:15:32.763289   90843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-203458
	I1213 09:15:32.790215   90843 host.go:66] Checking if "ha-203458" exists ...
	I1213 09:15:32.790521   90843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:15:32.790575   90843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-203458
	I1213 09:15:32.821313   90843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/ha-203458/id_rsa Username:docker}
	I1213 09:15:32.929340   90843 ssh_runner.go:195] Run: systemctl --version
	I1213 09:15:32.936166   90843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:15:32.956144   90843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:15:33.026811   90843 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 09:15:33.015497233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:15:33.027622   90843 kubeconfig.go:125] found "ha-203458" server: "https://192.168.49.254:8443"
	I1213 09:15:33.027673   90843 api_server.go:166] Checking apiserver status ...
	I1213 09:15:33.027748   90843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:15:33.049146   90843 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1213 09:15:33.058068   90843 api_server.go:182] apiserver freezer: "7:freezer:/docker/e6f8f5a461aa65bedce9df952f767ca12c06b4905af0f5286a03ab06ad7764ec/kubepods/burstable/pod8786dd01548af5a56a96c2b1b86cbe1a/9e3880d3750c652b86c0e1d1e0c003f52ff7b6ee9d8add27551f76446341bd5d"
	I1213 09:15:33.058160   90843 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e6f8f5a461aa65bedce9df952f767ca12c06b4905af0f5286a03ab06ad7764ec/kubepods/burstable/pod8786dd01548af5a56a96c2b1b86cbe1a/9e3880d3750c652b86c0e1d1e0c003f52ff7b6ee9d8add27551f76446341bd5d/freezer.state
	I1213 09:15:33.066229   90843 api_server.go:204] freezer state: "THAWED"
	I1213 09:15:33.066259   90843 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 09:15:33.074651   90843 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 09:15:33.074700   90843 status.go:463] ha-203458 apiserver status = Running (err=<nil>)
	I1213 09:15:33.074710   90843 status.go:176] ha-203458 status: &{Name:ha-203458 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:15:33.074728   90843 status.go:174] checking status of ha-203458-m02 ...
	I1213 09:15:33.075058   90843 cli_runner.go:164] Run: docker container inspect ha-203458-m02 --format={{.State.Status}}
	I1213 09:15:33.094347   90843 status.go:371] ha-203458-m02 host status = "Stopped" (err=<nil>)
	I1213 09:15:33.094374   90843 status.go:384] host is not running, skipping remaining checks
	I1213 09:15:33.094381   90843 status.go:176] ha-203458-m02 status: &{Name:ha-203458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:15:33.094409   90843 status.go:174] checking status of ha-203458-m03 ...
	I1213 09:15:33.094734   90843 cli_runner.go:164] Run: docker container inspect ha-203458-m03 --format={{.State.Status}}
	I1213 09:15:33.114843   90843 status.go:371] ha-203458-m03 host status = "Running" (err=<nil>)
	I1213 09:15:33.114872   90843 host.go:66] Checking if "ha-203458-m03" exists ...
	I1213 09:15:33.115186   90843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-203458-m03
	I1213 09:15:33.132569   90843 host.go:66] Checking if "ha-203458-m03" exists ...
	I1213 09:15:33.132885   90843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:15:33.132932   90843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-203458-m03
	I1213 09:15:33.150609   90843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/ha-203458-m03/id_rsa Username:docker}
	I1213 09:15:33.260861   90843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:15:33.275650   90843 kubeconfig.go:125] found "ha-203458" server: "https://192.168.49.254:8443"
	I1213 09:15:33.275680   90843 api_server.go:166] Checking apiserver status ...
	I1213 09:15:33.275723   90843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:15:33.288711   90843 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	I1213 09:15:33.303199   90843 api_server.go:182] apiserver freezer: "7:freezer:/docker/ee20a8ee4f1aadec3fbcd4c8bd0ac9a92b8127c5b9661e459086f089e0658c56/kubepods/burstable/pod4ab05006843e6b60cbf616917e5b1e69/53cbc1a0119c794916d34c94c8fb1e128c088859d307953bed69031f5529546f"
	I1213 09:15:33.303271   90843 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee20a8ee4f1aadec3fbcd4c8bd0ac9a92b8127c5b9661e459086f089e0658c56/kubepods/burstable/pod4ab05006843e6b60cbf616917e5b1e69/53cbc1a0119c794916d34c94c8fb1e128c088859d307953bed69031f5529546f/freezer.state
	I1213 09:15:33.312875   90843 api_server.go:204] freezer state: "THAWED"
	I1213 09:15:33.312901   90843 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 09:15:33.321218   90843 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 09:15:33.321247   90843 status.go:463] ha-203458-m03 apiserver status = Running (err=<nil>)
	I1213 09:15:33.321256   90843 status.go:176] ha-203458-m03 status: &{Name:ha-203458-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:15:33.321281   90843 status.go:174] checking status of ha-203458-m04 ...
	I1213 09:15:33.321593   90843 cli_runner.go:164] Run: docker container inspect ha-203458-m04 --format={{.State.Status}}
	I1213 09:15:33.339372   90843 status.go:371] ha-203458-m04 host status = "Running" (err=<nil>)
	I1213 09:15:33.339393   90843 host.go:66] Checking if "ha-203458-m04" exists ...
	I1213 09:15:33.339811   90843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-203458-m04
	I1213 09:15:33.356635   90843 host.go:66] Checking if "ha-203458-m04" exists ...
	I1213 09:15:33.356940   90843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:15:33.356983   90843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-203458-m04
	I1213 09:15:33.383035   90843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/ha-203458-m04/id_rsa Username:docker}
	I1213 09:15:33.493069   90843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:15:33.506197   90843 status.go:176] ha-203458-m04 status: &{Name:ha-203458-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.97s)
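The non-zero exit above is the expected signal, not a failure: in this run minikube status returned exit status 7 once m02 was stopped, so a script can detect a degraded HA cluster from the exit code alone. A minimal sketch, assuming the exit-code behavior shown in this log:

    out/minikube-linux-arm64 -p ha-203458 node stop m02
    out/minikube-linux-arm64 -p ha-203458 status
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "status exited $rc: at least one node is not running"
    fi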

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.41s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 node start m02 --alsologtostderr -v 5: (11.691641774s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5: (1.598741887s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.338011883s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.34s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.83s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 stop --alsologtostderr -v 5: (37.717747895s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 start --wait true --alsologtostderr -v 5
E1213 09:16:40.005425    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:16:54.952689    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:17:07.712301    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:17:14.443093    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 start --wait true --alsologtostderr -v 5: (1m8.947767494s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.83s)
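The property checked here is that a full stop/start cycle preserves cluster membership. Comparing node list output before and after makes the assertion explicit; a sketch using the same commands (the temp-file paths are illustrative):

    out/minikube-linux-arm64 -p ha-203458 node list > /tmp/nodes-before.txt
    out/minikube-linux-arm64 -p ha-203458 stop
    out/minikube-linux-arm64 -p ha-203458 start --wait true
    out/minikube-linux-arm64 -p ha-203458 node list > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # empty diff = same nodes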

TestMultiControlPlane/serial/DeleteSecondaryNode (11.2s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 node delete m03 --alsologtostderr -v 5: (10.22988309s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.20s)
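The go-template in the last step flattens each node's Ready condition to one line; the same check runs standalone with plain shell quoting (template copied from the test):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # one " True" line per remaining node is the expected output after the delete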

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.87s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.87s)

TestMultiControlPlane/serial/StopCluster (36.66s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 stop --alsologtostderr -v 5: (36.553628667s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5: exit status 7 (109.480817ms)

-- stdout --
	ha-203458
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-203458-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-203458-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 09:18:24.603954  105662 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:18:24.604121  105662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:24.604153  105662 out.go:374] Setting ErrFile to fd 2...
	I1213 09:18:24.604174  105662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:18:24.604568  105662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:18:24.604819  105662 out.go:368] Setting JSON to false
	I1213 09:18:24.604872  105662 mustload.go:66] Loading cluster: ha-203458
	I1213 09:18:24.605577  105662 config.go:182] Loaded profile config "ha-203458": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:18:24.605632  105662 status.go:174] checking status of ha-203458 ...
	I1213 09:18:24.606390  105662 cli_runner.go:164] Run: docker container inspect ha-203458 --format={{.State.Status}}
	I1213 09:18:24.607107  105662 notify.go:221] Checking for updates...
	I1213 09:18:24.626628  105662 status.go:371] ha-203458 host status = "Stopped" (err=<nil>)
	I1213 09:18:24.626660  105662 status.go:384] host is not running, skipping remaining checks
	I1213 09:18:24.626668  105662 status.go:176] ha-203458 status: &{Name:ha-203458 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:18:24.626696  105662 status.go:174] checking status of ha-203458-m02 ...
	I1213 09:18:24.627020  105662 cli_runner.go:164] Run: docker container inspect ha-203458-m02 --format={{.State.Status}}
	I1213 09:18:24.646769  105662 status.go:371] ha-203458-m02 host status = "Stopped" (err=<nil>)
	I1213 09:18:24.646790  105662 status.go:384] host is not running, skipping remaining checks
	I1213 09:18:24.646797  105662 status.go:176] ha-203458-m02 status: &{Name:ha-203458-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:18:24.646817  105662 status.go:174] checking status of ha-203458-m04 ...
	I1213 09:18:24.647106  105662 cli_runner.go:164] Run: docker container inspect ha-203458-m04 --format={{.State.Status}}
	I1213 09:18:24.670989  105662 status.go:371] ha-203458-m04 host status = "Stopped" (err=<nil>)
	I1213 09:18:24.671009  105662 status.go:384] host is not running, skipping remaining checks
	I1213 09:18:24.671016  105662 status.go:176] ha-203458-m04 status: &{Name:ha-203458-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.66s)

TestMultiControlPlane/serial/RestartCluster (60.74s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1213 09:18:51.887768    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.669947785s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.74s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.85s)

TestMultiControlPlane/serial/AddSecondaryNode (76.51s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 node add --control-plane --alsologtostderr -v 5: (1m15.369745382s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-203458 status --alsologtostderr -v 5: (1.14385007s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.51s)
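Growing the control plane back is a single command against the running profile; status should then list the new member with type Control Plane (commands as logged above):

    out/minikube-linux-arm64 -p ha-203458 node add --control-plane
    out/minikube-linux-arm64 -p ha-203458 status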

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.10717868s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestJSONOutput/start/Command (79.36s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-498194 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1213 09:21:40.008083    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-498194 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m19.35218003s)
--- PASS: TestJSONOutput/start/Command (79.36s)
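With --output=json each start step is emitted as a CloudEvents record on stdout, so progress is machine-readable. A sketch that prints a step counter, assuming jq is available (the data fields match the events shown under TestErrorJSONOutput below):

    out/minikube-linux-arm64 start -p json-output-498194 --output=json --driver=docker --container-runtime=containerd \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data | "\(.currentstep)/\(.totalsteps) \(.name)"'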

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-498194 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-498194 --output=json --user=testUser
E1213 09:22:14.443005    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.98s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-498194 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-498194 --output=json --user=testUser: (5.977468461s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-842920 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-842920 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.853797ms)

-- stdout --
	{"specversion":"1.0","id":"a3343cb5-cea9-41db-84ba-ae482abbc993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-842920] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a3c21e0-155f-4aa4-a7d9-e96ffe5dcf4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"8fa5cf71-63a8-4342-afe2-62bbe2d980df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9de130ab-da17-4a1d-8cb0-910c1c029d76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig"}}
	{"specversion":"1.0","id":"df97faeb-c250-4a60-9334-2b777e5de1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube"}}
	{"specversion":"1.0","id":"0654b883-93a8-46b1-a0c3-c3dbc17bb134","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"46b4a4cc-b794-420f-9a82-6ad7f3bdaa24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e2bf74a-870e-423b-a632-fffd5a82645d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-842920" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-842920
--- PASS: TestErrorJSONOutput (0.23s)
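Error events carry a stable name and exit code in their data field, so wrappers can branch on the failure class instead of scraping text. A sketch, again assuming jq is available:

    out/minikube-linux-arm64 start -p json-output-error-842920 --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
    # with this run's input: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64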

TestKicCustomNetwork/create_custom_network (38.34s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-559887 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-559887 --network=: (36.109693777s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-559887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-559887
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-559887: (2.214763044s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.34s)

TestKicCustomNetwork/use_default_bridge_network (36.54s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-874075 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-874075 --network=bridge: (34.479695076s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-874075" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-874075
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-874075: (2.038199807s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.54s)

TestKicExistingNetwork (37.43s)
=== RUN   TestKicExistingNetwork
I1213 09:23:41.507593    4120 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 09:23:41.527313    4120 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 09:23:41.527388    4120 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 09:23:41.527404    4120 cli_runner.go:164] Run: docker network inspect existing-network
W1213 09:23:41.543714    4120 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 09:23:41.543748    4120 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1213 09:23:41.543763    4120 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1213 09:23:41.543867    4120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 09:23:41.559216    4120 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c365c32601a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:cc:58:ff:fb:ac} reservation:<nil>}
I1213 09:23:41.559473    4120 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40027a70d0}
I1213 09:23:41.559492    4120 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 09:23:41.559605    4120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 09:23:41.621350    4120 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-376702 --network=existing-network
E1213 09:23:51.887698    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-376702 --network=existing-network: (35.17593357s)
helpers_test.go:176: Cleaning up "existing-network-376702" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-376702
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-376702: (2.103562649s)
I1213 09:24:18.916954    4120 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.43s)
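The test pre-creates the docker network with the same driver options and labels minikube applies itself, then points --network at it; done by hand (flags copied from the log above):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
        existing-network
    out/minikube-linux-arm64 start -p existing-network-376702 --network=existing-network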

TestKicCustomSubnet (35.25s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-865332 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-865332 --subnet=192.168.60.0/24: (32.964923404s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-865332 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-865332" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-865332
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-865332: (2.260914169s)
--- PASS: TestKicCustomSubnet (35.25s)
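--subnet pins the CIDR of the network minikube creates, and the assertion is a one-line docker network inspect (commands as logged):

    out/minikube-linux-arm64 start -p custom-subnet-865332 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-865332 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected output: 192.168.60.0/24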

TestKicStaticIP (33.53s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-454878 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-454878 --static-ip=192.168.200.200: (31.125173049s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-454878 ip
helpers_test.go:176: Cleaning up "static-ip-454878" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-454878
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-454878: (2.249588637s)
--- PASS: TestKicStaticIP (33.53s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.25s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-759860 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-759860 --driver=docker  --container-runtime=containerd: (30.888639061s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-762544 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-762544 --driver=docker  --container-runtime=containerd: (31.579705276s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-759860
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-762544
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-762544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-762544
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-762544: (2.088341778s)
helpers_test.go:176: Cleaning up "first-759860" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-759860
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-759860: (2.332460122s)
--- PASS: TestMinikubeProfile (68.25s)
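minikube profile <name> switches which cluster subsequent commands target; the test flips between the two clusters and reads profile list each time to confirm the active entry changed (sketch from the logged commands):

    out/minikube-linux-arm64 profile first-759860
    out/minikube-linux-arm64 profile list -ojson
    out/minikube-linux-arm64 profile second-762544
    out/minikube-linux-arm64 profile list -ojson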

TestMountStart/serial/StartWithMountFirst (5.93s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-392319 --memory=3072 --mount-string /tmp/TestMountStartserial2704384533/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1213 09:26:40.004876    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-392319 --memory=3072 --mount-string /tmp/TestMountStartserial2704384533/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.934522641s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.93s)
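The mount flags bind a host directory into the guest at start time (minikube serves it over 9p, which is what --mount-msize and --mount-port tune), and the later Verify steps just ls the guest path. A hand-run sketch with this run's flags; any readable host directory works in place of the test's temp dir:

    out/minikube-linux-arm64 start -p mount-start-1-392319 --memory=3072 \
        --mount-string /tmp/TestMountStartserial2704384533/001:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-392319 ssh -- ls /minikube-host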

TestMountStart/serial/VerifyMountFirst (0.31s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-392319 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (8.46s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-393977 --memory=3072 --mount-string /tmp/TestMountStartserial2704384533/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-393977 --memory=3072 --mount-string /tmp/TestMountStartserial2704384533/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.45774617s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.46s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393977 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-392319 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-392319 --alsologtostderr -v=5: (1.709114318s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393977 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-393977
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-393977: (1.297659215s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (7.49s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-393977
E1213 09:26:57.519668    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-393977: (6.489000195s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-393977 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (104.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-802240 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 09:27:14.443672    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:28:03.074214    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-802240 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.05546532s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
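The two-node bring-up above reduces to the following, with a hypothetical profile name:

    minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=containerd
    minikube status -p multinode-demo    # expect a Running control plane plus one Running worker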
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.61s)

TestMultiNode/serial/DeployApp2Nodes (5.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- rollout status deployment/busybox
E1213 09:28:51.888726    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-802240 -- rollout status deployment/busybox: (3.184278536s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-bd877 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-n25qf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-bd877 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-n25qf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-bd877 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-n25qf -- nslookup kubernetes.default.svc.cluster.local
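The three lookups above check, in order, external DNS, the in-cluster short name, and the fully qualified service name from inside each pod. By hand (pod name hypothetical):

    kubectl exec <busybox-pod> -- nslookup kubernetes.io
    kubectl exec <busybox-pod> -- nslookup kubernetes.default
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local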
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.05s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-bd877 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-bd877 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-n25qf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-802240 -- exec busybox-7b57f96db7-n25qf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (27.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-802240 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-802240 -v=5 --alsologtostderr: (26.701843541s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.42s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-802240 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp testdata/cp-test.txt multinode-802240:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2740311204/001/cp-test_multinode-802240.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240:/home/docker/cp-test.txt multinode-802240-m02:/home/docker/cp-test_multinode-802240_multinode-802240-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test_multinode-802240_multinode-802240-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240:/home/docker/cp-test.txt multinode-802240-m03:/home/docker/cp-test_multinode-802240_multinode-802240-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test_multinode-802240_multinode-802240-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp testdata/cp-test.txt multinode-802240-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2740311204/001/cp-test_multinode-802240-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m02:/home/docker/cp-test.txt multinode-802240:/home/docker/cp-test_multinode-802240-m02_multinode-802240.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test_multinode-802240-m02_multinode-802240.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m02:/home/docker/cp-test.txt multinode-802240-m03:/home/docker/cp-test_multinode-802240-m02_multinode-802240-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test_multinode-802240-m02_multinode-802240-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp testdata/cp-test.txt multinode-802240-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2740311204/001/cp-test_multinode-802240-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m03:/home/docker/cp-test.txt multinode-802240:/home/docker/cp-test_multinode-802240-m03_multinode-802240.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240 "sudo cat /home/docker/cp-test_multinode-802240-m03_multinode-802240.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 cp multinode-802240-m03:/home/docker/cp-test.txt multinode-802240-m02:/home/docker/cp-test_multinode-802240-m03_multinode-802240-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 ssh -n multinode-802240-m02 "sudo cat /home/docker/cp-test_multinode-802240-m03_multinode-802240-m02.txt"
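The matrix above covers every direction minikube cp supports between the host and the nodes; the three basic forms, with hypothetical profile and paths, are:

    minikube -p multinode-demo cp local.txt multinode-demo:/home/docker/remote.txt    # host -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/remote.txt ./back.txt   # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/remote.txt \
      multinode-demo-m02:/home/docker/remote.txt                                      # node -> node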
--- PASS: TestMultiNode/serial/CopyFile (10.57s)

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-802240 node stop m03: (1.3346921s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-802240 status: exit status 7 (566.406075ms)

                                                
                                                
-- stdout --
	multinode-802240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-802240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-802240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr: exit status 7 (545.245777ms)

                                                
                                                
-- stdout --
	multinode-802240
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-802240-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-802240-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:29:35.297871  159033 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:29:35.297975  159033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:29:35.297985  159033 out.go:374] Setting ErrFile to fd 2...
	I1213 09:29:35.297991  159033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:29:35.298244  159033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:29:35.298424  159033 out.go:368] Setting JSON to false
	I1213 09:29:35.298455  159033 mustload.go:66] Loading cluster: multinode-802240
	I1213 09:29:35.298561  159033 notify.go:221] Checking for updates...
	I1213 09:29:35.298859  159033 config.go:182] Loaded profile config "multinode-802240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:29:35.298881  159033 status.go:174] checking status of multinode-802240 ...
	I1213 09:29:35.299835  159033 cli_runner.go:164] Run: docker container inspect multinode-802240 --format={{.State.Status}}
	I1213 09:29:35.323186  159033 status.go:371] multinode-802240 host status = "Running" (err=<nil>)
	I1213 09:29:35.323209  159033 host.go:66] Checking if "multinode-802240" exists ...
	I1213 09:29:35.323504  159033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-802240
	I1213 09:29:35.352749  159033 host.go:66] Checking if "multinode-802240" exists ...
	I1213 09:29:35.353153  159033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:29:35.353201  159033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-802240
	I1213 09:29:35.370754  159033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/multinode-802240/id_rsa Username:docker}
	I1213 09:29:35.473214  159033 ssh_runner.go:195] Run: systemctl --version
	I1213 09:29:35.481363  159033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:29:35.497377  159033 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:29:35.561306  159033 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 09:29:35.549193994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:29:35.561853  159033 kubeconfig.go:125] found "multinode-802240" server: "https://192.168.67.2:8443"
	I1213 09:29:35.561894  159033 api_server.go:166] Checking apiserver status ...
	I1213 09:29:35.561950  159033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:29:35.574672  159033 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I1213 09:29:35.583410  159033 api_server.go:182] apiserver freezer: "7:freezer:/docker/0c1caea31f5957e081cb7663731c30354579a33701d64d40f4797dc41fb6474d/kubepods/burstable/pod87aa7e892aa6f33c86761f35355849ed/9146c0469d71181c8538908a06115c8b61303bc16c2e371cb3993d5510dc84f9"
	I1213 09:29:35.583477  159033 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0c1caea31f5957e081cb7663731c30354579a33701d64d40f4797dc41fb6474d/kubepods/burstable/pod87aa7e892aa6f33c86761f35355849ed/9146c0469d71181c8538908a06115c8b61303bc16c2e371cb3993d5510dc84f9/freezer.state
	I1213 09:29:35.591635  159033 api_server.go:204] freezer state: "THAWED"
	I1213 09:29:35.591707  159033 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 09:29:35.600899  159033 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 09:29:35.600935  159033 status.go:463] multinode-802240 apiserver status = Running (err=<nil>)
	I1213 09:29:35.600946  159033 status.go:176] multinode-802240 status: &{Name:multinode-802240 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:29:35.600961  159033 status.go:174] checking status of multinode-802240-m02 ...
	I1213 09:29:35.601270  159033 cli_runner.go:164] Run: docker container inspect multinode-802240-m02 --format={{.State.Status}}
	I1213 09:29:35.617823  159033 status.go:371] multinode-802240-m02 host status = "Running" (err=<nil>)
	I1213 09:29:35.617847  159033 host.go:66] Checking if "multinode-802240-m02" exists ...
	I1213 09:29:35.618144  159033 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-802240-m02
	I1213 09:29:35.634944  159033 host.go:66] Checking if "multinode-802240-m02" exists ...
	I1213 09:29:35.635358  159033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:29:35.635422  159033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-802240-m02
	I1213 09:29:35.654173  159033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/multinode-802240-m02/id_rsa Username:docker}
	I1213 09:29:35.756847  159033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:29:35.769569  159033 status.go:176] multinode-802240-m02 status: &{Name:multinode-802240-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:29:35.769605  159033 status.go:174] checking status of multinode-802240-m03 ...
	I1213 09:29:35.769901  159033 cli_runner.go:164] Run: docker container inspect multinode-802240-m03 --format={{.State.Status}}
	I1213 09:29:35.789124  159033 status.go:371] multinode-802240-m03 host status = "Stopped" (err=<nil>)
	I1213 09:29:35.789147  159033 status.go:384] host is not running, skipping remaining checks
	I1213 09:29:35.789155  159033 status.go:176] multinode-802240-m03 status: &{Name:multinode-802240-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
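As the transcript shows, stopping one worker flips minikube status to exit status 7 while the remaining nodes still report Running; a quick way to reproduce, with a hypothetical profile:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status; echo "exit: $?"    # per-node table, then exit: 7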
--- PASS: TestMultiNode/serial/StopNode (2.45s)

TestMultiNode/serial/StartAfterStop (7.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-802240 node start m03 -v=5 --alsologtostderr: (7.087223018s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.89s)

TestMultiNode/serial/RestartKeepsNodes (77.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-802240
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-802240
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-802240: (25.208262024s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-802240 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-802240 --wait=true -v=5 --alsologtostderr: (52.432174363s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-802240
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.77s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-802240 node delete m03: (4.960399616s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-802240 stop: (23.864257767s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-802240 status: exit status 7 (96.236512ms)

                                                
                                                
-- stdout --
	multinode-802240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-802240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr: exit status 7 (97.075257ms)

                                                
                                                
-- stdout --
	multinode-802240
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-802240-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:31:31.195945  167832 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:31:31.196104  167832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:31:31.196127  167832 out.go:374] Setting ErrFile to fd 2...
	I1213 09:31:31.196147  167832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:31:31.196542  167832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:31:31.196783  167832 out.go:368] Setting JSON to false
	I1213 09:31:31.196831  167832 mustload.go:66] Loading cluster: multinode-802240
	I1213 09:31:31.197525  167832 config.go:182] Loaded profile config "multinode-802240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:31:31.197566  167832 status.go:174] checking status of multinode-802240 ...
	I1213 09:31:31.198589  167832 notify.go:221] Checking for updates...
	I1213 09:31:31.198818  167832 cli_runner.go:164] Run: docker container inspect multinode-802240 --format={{.State.Status}}
	I1213 09:31:31.219373  167832 status.go:371] multinode-802240 host status = "Stopped" (err=<nil>)
	I1213 09:31:31.219399  167832 status.go:384] host is not running, skipping remaining checks
	I1213 09:31:31.219406  167832 status.go:176] multinode-802240 status: &{Name:multinode-802240 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 09:31:31.219439  167832 status.go:174] checking status of multinode-802240-m02 ...
	I1213 09:31:31.219813  167832 cli_runner.go:164] Run: docker container inspect multinode-802240-m02 --format={{.State.Status}}
	I1213 09:31:31.248872  167832 status.go:371] multinode-802240-m02 host status = "Stopped" (err=<nil>)
	I1213 09:31:31.248892  167832 status.go:384] host is not running, skipping remaining checks
	I1213 09:31:31.248900  167832 status.go:176] multinode-802240-m02 status: &{Name:multinode-802240-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

TestMultiNode/serial/RestartMultiNode (52.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-802240 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1213 09:31:40.004756    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:32:14.443464    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-802240 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.028874519s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-802240 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.72s)

TestMultiNode/serial/ValidateNameConflict (37.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-802240
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-802240-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-802240-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.691003ms)

                                                
                                                
-- stdout --
	* [multinode-802240-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-802240-m02' is duplicated with machine name 'multinode-802240-m02' in profile 'multinode-802240'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-802240-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-802240-m03 --driver=docker  --container-runtime=containerd: (34.37618321s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-802240
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-802240: exit status 80 (338.798031ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-802240 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-802240-m03 already exists in multinode-802240-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-802240-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-802240-m03: (2.149099385s)
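Both failures above are the intended guards: start exits 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing multi-node profile, and node add exits 80 (GUEST_NODE_ADD) when the next node name is already taken by a standalone profile. Listing profiles first shows which names are in use:

    minikube profile list --output json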
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.01s)

TestPreload (119.21s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-673557 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1213 09:33:34.954364    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:33:51.888704    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-673557 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (57.572017706s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-673557 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-673557 image pull gcr.io/k8s-minikube/busybox: (2.34641114s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-673557
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-673557: (5.892461309s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-673557 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-673557 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (50.655577034s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-673557 image list
helpers_test.go:176: Cleaning up "test-preload-673557" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-673557
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-673557: (2.494667418s)
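Condensed, the preload round-trip above is: start with preloads disabled, pull an extra image, stop, then restart with preloads enabled and confirm the image survived the restart (profile name hypothetical):

    minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --wait=true --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list    # busybox should still be present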
--- PASS: TestPreload (119.21s)

TestScheduledStopUnix (108.81s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-952943 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-952943 --memory=3072 --driver=docker  --container-runtime=containerd: (32.65982199s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952943 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:35:37.110824  183684 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:37.111005  183684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:37.111031  183684 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:37.111049  183684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:37.111339  183684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:35:37.111699  183684 out.go:368] Setting JSON to false
	I1213 09:35:37.111871  183684 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:35:37.112295  183684 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:35:37.112416  183684 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/config.json ...
	I1213 09:35:37.112630  183684 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:35:37.112790  183684 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-952943 -n scheduled-stop-952943
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952943 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:35:37.565680  183776 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:35:37.565834  183776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:37.565857  183776 out.go:374] Setting ErrFile to fd 2...
	I1213 09:35:37.565876  183776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:35:37.566135  183776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:35:37.566414  183776 out.go:368] Setting JSON to false
	I1213 09:35:37.566643  183776 daemonize_unix.go:73] killing process 183701 as it is an old scheduled stop
	I1213 09:35:37.566798  183776 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:35:37.567201  183776 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:35:37.567300  183776 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/config.json ...
	I1213 09:35:37.567564  183776 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:35:37.567723  183776 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 09:35:37.573802    4120 retry.go:31] will retry after 86.462µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.574136    4120 retry.go:31] will retry after 82.05µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.574381    4120 retry.go:31] will retry after 235.113µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.575258    4120 retry.go:31] will retry after 291.787µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.585719    4120 retry.go:31] will retry after 634.634µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.586855    4120 retry.go:31] will retry after 747.848µs: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.588517    4120 retry.go:31] will retry after 1.632441ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.590712    4120 retry.go:31] will retry after 1.562297ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.592895    4120 retry.go:31] will retry after 2.440568ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.596084    4120 retry.go:31] will retry after 5.533696ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.602303    4120 retry.go:31] will retry after 6.455719ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.609531    4120 retry.go:31] will retry after 10.936054ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.620757    4120 retry.go:31] will retry after 9.470345ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.630988    4120 retry.go:31] will retry after 10.06132ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.641157    4120 retry.go:31] will retry after 18.458548ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
I1213 09:35:37.660386    4120 retry.go:31] will retry after 52.778301ms: open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952943 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952943 -n scheduled-stop-952943
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-952943
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952943 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:36:03.532743  184448 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:36:03.533299  184448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:03.533326  184448 out.go:374] Setting ErrFile to fd 2...
	I1213 09:36:03.533346  184448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:36:03.533655  184448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:36:03.533954  184448 out.go:368] Setting JSON to false
	I1213 09:36:03.534073  184448 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:36:03.534447  184448 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1213 09:36:03.534534  184448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/scheduled-stop-952943/config.json ...
	I1213 09:36:03.534733  184448 mustload.go:66] Loading cluster: scheduled-stop-952943
	I1213 09:36:03.534869  184448 config.go:182] Loaded profile config "scheduled-stop-952943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1213 09:36:40.006115    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-952943
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-952943: exit status 7 (70.748526ms)

                                                
                                                
-- stdout --
	scheduled-stop-952943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952943 -n scheduled-stop-952943
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952943 -n scheduled-stop-952943: exit status 7 (67.639015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-952943" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-952943
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-952943: (4.523518938s)
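The scheduled-stop flow above in its shortest form: arm a delayed stop (a newer --schedule replaces any pending one, as the "killing process ... old scheduled stop" line shows), optionally cancel it, or let it fire and watch status drop to Stopped with exit 7 (profile name hypothetical):

    minikube stop -p sched-demo --schedule 15s
    minikube stop -p sched-demo --cancel-scheduled    # prints: All existing scheduled stops cancelled
    minikube status -p sched-demo                     # exit 7 once the stop has fired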
--- PASS: TestScheduledStopUnix (108.81s)

TestInsufficientStorage (9.96s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-713654 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-713654 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.40035542s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"890c6562-4710-4fd8-994c-cf6e56cb5c3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-713654] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c2283f2-ac52-47b7-a121-65cf0ead9562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"df7d174f-93e7-4fb3-a6da-61e232f1d1fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ae4feea-1d44-46be-b9a6-b0577f3beee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig"}}
	{"specversion":"1.0","id":"7801526b-f1e0-4a6b-9ef9-95d61c78ae66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube"}}
	{"specversion":"1.0","id":"4ccb2075-4616-44c4-a8ec-1a270790ebcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"98111d2d-0871-4ec0-a5be-d5a5f0bcf72e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aacc3290-b8b1-4d6b-b382-663530d8a4b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0b6dd89e-142b-4728-9640-70704cf4486d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3cab51bd-a970-4208-8d23-e1873a076e11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"05c5952e-369a-44eb-bdb6-c420c15d1910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"77d17b3e-7516-4664-8ad3-f4ead242cd95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-713654\" primary control-plane node in \"insufficient-storage-713654\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dff88fb1-2db9-4a85-b4d8-1f81fc9d513c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f1cde5e-6781-4a37-889e-1c50f50a81b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a80b8cc7-ee60-4411-a040-4182596d22c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-713654 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-713654 --output=json --layout=cluster: exit status 7 (288.101614ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-713654","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-713654","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 09:37:00.878579  186279 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-713654" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-713654 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-713654 --output=json --layout=cluster: exit status 7 (307.258526ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-713654","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-713654","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 09:37:01.187137  186346 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-713654" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
	E1213 09:37:01.197457  186346 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/insufficient-storage-713654/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-713654" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-713654
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-713654: (1.958456545s)
--- PASS: TestInsufficientStorage (9.96s)
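
The RSRC_DOCKER_STORAGE advice above reduces to a handful of commands. A minimal sketch of the recovery steps (the df check is an illustrative addition; the rest comes from the error text itself):

    # See how full the Docker data directory really is
    $ df -h /var

    # Reclaim unused Docker data; -a also removes unused images
    $ docker system prune -a

    # Same, but inside the minikube node when it uses the Docker runtime
    $ minikube ssh -- docker system prune

    # Or bypass the storage check entirely, as the message notes
    $ minikube start -p insufficient-storage-713654 --force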

                                                
                                    
TestRunningBinaryUpgrade (61.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1058449938 start -p running-upgrade-290948 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1058449938 start -p running-upgrade-290948 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (30.898017466s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-290948 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-290948 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.748853569s)
helpers_test.go:176: Cleaning up "running-upgrade-290948" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-290948
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-290948: (2.403553106s)
--- PASS: TestRunningBinaryUpgrade (61.92s)

                                                
                                    
TestMissingContainerUpgrade (128.14s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2483391168 start -p missing-upgrade-407028 --memory=3072 --driver=docker  --container-runtime=containerd
E1213 09:37:14.443195    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2483391168 start -p missing-upgrade-407028 --memory=3072 --driver=docker  --container-runtime=containerd: (1m1.091678597s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-407028
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-407028
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-407028 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-407028 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.788606692s)
helpers_test.go:176: Cleaning up "missing-upgrade-407028" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-407028
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-407028: (2.473615609s)
--- PASS: TestMissingContainerUpgrade (128.14s)
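
The recovery path this test exercises can be replayed by hand: create a cluster with the previous release, remove its container out from under it, then let the new binary rebuild it from the saved profile. A sketch using the commands from the log (the /tmp path is the test's temporary copy of the v1.35.0 binary):

    $ /tmp/minikube-v1.35.0.2483391168 start -p missing-upgrade-407028 --memory=3072 --driver=docker --container-runtime=containerd
    $ docker stop missing-upgrade-407028
    $ docker rm missing-upgrade-407028

    # The new binary recreates the missing container in place
    $ out/minikube-linux-arm64 start -p missing-upgrade-407028 --memory=3072 --driver=docker --container-runtime=containerd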

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (93.301054ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-376533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
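
The exit status 14 above is the point of the test: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the failing call and its two valid alternatives (flags taken from the output above):

    # Rejected with MK_USAGE (exit status 14)
    $ minikube start -p NoKubernetes-376533 --no-kubernetes --kubernetes-version=v1.28.0

    # Either drop the version flag ...
    $ minikube start -p NoKubernetes-376533 --no-kubernetes

    # ... or clear a globally configured version first, as the error suggests
    $ minikube config unset kubernetes-version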

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-376533 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-376533 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (49.271051999s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-376533 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.87s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.064963652s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-376533 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-376533 status -o json: exit status 2 (298.261346ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-376533","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-376533
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-376533: (2.010053347s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.37s)
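
Exit status 2 from status is how minikube signals a running host whose Kubernetes components are stopped; the JSON output makes that machine-readable. A sketch, assuming jq is available (jq is not part of the test):

    $ out/minikube-linux-arm64 -p NoKubernetes-376533 status -o json | jq -r '.Host, .Kubelet, .APIServer'
    Running
    Stopped
    Stopped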

                                                
                                    
TestNoKubernetes/serial/Start (7.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-376533 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.212835334s)
--- PASS: TestNoKubernetes/serial/Start (7.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-376533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-376533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.275498ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
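
The non-zero exit here is the test passing: systemctl is-active --quiet exits 0 only when the unit is active, and the remote status 3 is systemd's code for an inactive unit, which minikube ssh surfaces as exit status 1. A sketch of the same probe:

    # 0 would mean kubelet is running; non-zero (inactive) is what this test expects
    $ out/minikube-linux-arm64 ssh -p NoKubernetes-376533 "sudo systemctl is-active --quiet service kubelet"
    $ echo $?
    1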

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-376533
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-376533: (1.280131599s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-376533 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-376533 --driver=docker  --container-runtime=containerd: (6.452712548s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-376533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-376533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.963196ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (57.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3663202163 start -p stopped-upgrade-790234 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3663202163 start -p stopped-upgrade-790234 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (37.689555941s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3663202163 -p stopped-upgrade-790234 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3663202163 -p stopped-upgrade-790234 stop: (1.234314301s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-790234 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-790234 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.038378439s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (57.96s)
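
The upgrade path under test is: provision with the old release, stop it, then start the same profile with the new binary, which adopts and upgrades the stopped cluster in place. Condensed from the commands above (the /tmp path is the test's temporary copy of v1.35.0):

    $ /tmp/minikube-v1.35.0.3663202163 start -p stopped-upgrade-790234 --memory=3072 --vm-driver=docker --container-runtime=containerd
    $ /tmp/minikube-v1.35.0.3663202163 -p stopped-upgrade-790234 stop
    $ out/minikube-linux-arm64 start -p stopped-upgrade-790234 --memory=3072 --driver=docker --container-runtime=containerd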

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-790234
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-790234: (1.871672232s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

                                                
                                    
TestPause/serial/Start (83.81s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-232492 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1213 09:41:40.005177    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:42:14.443034    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-232492 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.812977875s)
--- PASS: TestPause/serial/Start (83.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.85s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-232492 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-232492 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.835737916s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.85s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-232492 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-232492 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-232492 --output=json --layout=cluster: exit status 2 (389.871863ms)

                                                
                                                
-- stdout --
	{"Name":"pause-232492","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-232492","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
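
Status code 418 is minikube's marker for a paused component, and status deliberately exits 2 so scripts can tell paused apart from healthy (exit 0) and stopped (exit 7, seen elsewhere in this run). A sketch, again assuming jq:

    $ out/minikube-linux-arm64 status -p pause-232492 --output=json --layout=cluster | jq -r '.StatusName'
    Paused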

                                                
                                    
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-232492 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-232492 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
TestPause/serial/DeletePaused (2.49s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-232492 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-232492 --alsologtostderr -v=5: (2.486166188s)
--- PASS: TestPause/serial/DeletePaused (2.49s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-232492
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-232492: exit status 1 (16.443954ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-232492: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)
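
Deletion is verified negatively: the container, volume, and network listings must no longer mention the profile, and the volume lookup must fail outright. A sketch of equivalent checks (the name filter and grep are illustrative additions):

    $ docker ps -a --filter name=pause-232492
    $ docker volume inspect pause-232492    # exits 1: "no such volume"
    $ docker network ls | grep pause-232492 || echo "network gone"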

                                                
                                    
TestNetworkPlugins/group/false (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-324081 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-324081 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (275.798422ms)

                                                
                                                
-- stdout --
	* [false-324081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:43:32.094283  229523 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:43:32.094523  229523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:43:32.094551  229523 out.go:374] Setting ErrFile to fd 2...
	I1213 09:43:32.094571  229523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:43:32.094868  229523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
	I1213 09:43:32.095327  229523 out.go:368] Setting JSON to false
	I1213 09:43:32.096323  229523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5164,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 09:43:32.096422  229523 start.go:143] virtualization:  
	I1213 09:43:32.101555  229523 out.go:179] * [false-324081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 09:43:32.105035  229523 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:43:32.105121  229523 notify.go:221] Checking for updates...
	I1213 09:43:32.109206  229523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:43:32.113519  229523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
	I1213 09:43:32.117850  229523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
	I1213 09:43:32.121300  229523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 09:43:32.128748  229523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:43:32.136473  229523 config.go:182] Loaded profile config "kubernetes-upgrade-355809": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1213 09:43:32.136576  229523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:43:32.183370  229523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 09:43:32.183490  229523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:43:32.275034  229523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 09:43:32.264047462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 09:43:32.275142  229523 docker.go:319] overlay module found
	I1213 09:43:32.280656  229523 out.go:179] * Using the docker driver based on user configuration
	I1213 09:43:32.283765  229523 start.go:309] selected driver: docker
	I1213 09:43:32.283792  229523 start.go:927] validating driver "docker" against <nil>
	I1213 09:43:32.283807  229523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:43:32.287575  229523 out.go:203] 
	W1213 09:43:32.290614  229523 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1213 09:43:32.293538  229523 out.go:203] 

                                                
                                                
** /stderr **
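
The usage error is expected: containerd brings no networking of its own, so minikube refuses --cni=false with it; running without a CNI is effectively a docker-runtime-only option. A sketch (calico is just one of minikube's supported --cni values):

    # Rejected: the containerd runtime requires CNI
    $ minikube start -p false-324081 --cni=false --container-runtime=containerd

    # Fine: let minikube choose a CNI ...
    $ minikube start -p false-324081 --container-runtime=containerd

    # ... or name one explicitly
    $ minikube start -p false-324081 --cni=calico --container-runtime=containerd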
net_test.go:88: 
----------------------- debugLogs start: false-324081 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-324081" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:39:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-355809
contexts:
- context:
    cluster: kubernetes-upgrade-355809
    user: kubernetes-upgrade-355809
  name: kubernetes-upgrade-355809
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-355809
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.crt
    client-key: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-324081

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-324081"

                                                
                                                
----------------------- debugLogs end: false-324081 [took: 3.766787573s] --------------------------------
helpers_test.go:176: Cleaning up "false-324081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-324081
--- PASS: TestNetworkPlugins/group/false (4.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1213 09:48:51.888792    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m2.972054614s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-640993 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [66daa6e3-67be-43eb-b143-0115dd60156e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [66daa6e3-67be-43eb-b143-0115dd60156e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003794927s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-640993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
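
The deploy check is plain kubectl against the profile's context: create the pod, wait for readiness, then exec into it. A sketch (the kubectl wait form is an illustrative stand-in for the test's polling loop):

    $ kubectl --context old-k8s-version-640993 create -f testdata/busybox.yaml
    $ kubectl --context old-k8s-version-640993 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    $ kubectl --context old-k8s-version-640993 exec busybox -- /bin/sh -c "ulimit -n"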

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-640993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-640993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.197539527s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-640993 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)
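
Note the flag pairing used to point the addon at stand-in images: --images overrides the image per component, --registries the registry it is pulled from, and the describe call confirms what the deployment ended up referencing. Sketch (the grep is an illustrative addition):

    $ minikube addons enable metrics-server -p old-k8s-version-640993 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context old-k8s-version-640993 describe deploy/metrics-server -n kube-system | grep -i image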

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-640993 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-640993 --alsologtostderr -v=3: (12.128648176s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-640993 -n old-k8s-version-640993
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-640993 -n old-k8s-version-640993: exit status 7 (72.631603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-640993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
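
Exit status 7 from status means the host is stopped, which the test explicitly tolerates ("may be ok") before toggling an addon offline: the change is recorded in the profile and takes effect at the next start. Sketch:

    # exit 7 = Stopped; acceptable at this stage
    $ minikube status --format={{.Host}} -p old-k8s-version-640993 -n old-k8s-version-640993

    # Addons can be enabled while the cluster is down; applied on SecondStart
    $ minikube addons enable dashboard -p old-k8s-version-640993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4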

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (49.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1213 09:50:14.956704    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-640993 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (48.96283464s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-640993 -n old-k8s-version-640993
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.35s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-v4d97" [ff5d00cd-8da3-4d65-9b7d-c7ab09b90bbd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004136273s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-v4d97" [ff5d00cd-8da3-4d65-9b7d-c7ab09b90bbd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003627544s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-640993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-640993 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-640993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-640993 -n old-k8s-version-640993
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-640993 -n old-k8s-version-640993: exit status 2 (516.601223ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-640993 -n old-k8s-version-640993
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-640993 -n old-k8s-version-640993: exit status 2 (344.92855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-640993 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-640993 -n old-k8s-version-640993
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-640993 -n old-k8s-version-640993
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.28s)

TestStartStop/group/embed-certs/serial/FirstStart (77.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 09:51:40.006332    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m17.107933823s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.11s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-238987 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7164b92f-0312-4742-a668-c84034116ce5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7164b92f-0312-4742-a668-c84034116ce5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004063065s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-238987 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-238987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-238987 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (12.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-238987 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-238987 --alsologtostderr -v=3: (12.17195039s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238987 -n embed-certs-238987
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238987 -n embed-certs-238987: exit status 7 (75.907181ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-238987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (51.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-238987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (50.851301691s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238987 -n embed-certs-238987
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5ws59" [fd576291-8469-4d74-9e31-54b51711692f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003579037s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-5ws59" [fd576291-8469-4d74-9e31-54b51711692f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002820001s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-238987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-238987 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-238987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238987 -n embed-certs-238987
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238987 -n embed-certs-238987: exit status 2 (327.056353ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238987 -n embed-certs-238987
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238987 -n embed-certs-238987: exit status 2 (352.344563ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-238987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238987 -n embed-certs-238987
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238987 -n embed-certs-238987
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 09:54:35.553983    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.562038    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.573390    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.594881    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.636268    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.717721    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:35.879227    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:36.201041    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:36.842597    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:38.124731    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:40.686576    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:45.808186    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:54:56.050290    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (1m18.144222208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544967 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f229fc78-009f-433c-b998-6d90039a767d] Pending
helpers_test.go:353: "busybox" [f229fc78-009f-433c-b998-6d90039a767d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f229fc78-009f-433c-b998-6d90039a767d] Running
E1213 09:55:16.532500    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003541769s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-544967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-544967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-544967 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-544967 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-544967 --alsologtostderr -v=3: (12.137063732s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967: exit status 7 (72.411616ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-544967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1213 09:55:57.494585    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/old-k8s-version-640993/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-544967 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (47.873599472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ns7zq" [9c15ad1b-7473-4b0d-bcef-079a55304dde] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003401449s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ns7zq" [9c15ad1b-7473-4b0d-bcef-079a55304dde] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00357395s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-544967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-544967 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-544967 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967: exit status 2 (331.428814ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967: exit status 2 (348.378533ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-544967 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-544967 -n default-k8s-diff-port-544967
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.07s)

TestStartStop/group/no-preload/serial/Stop (1.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-328069 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-328069 --alsologtostderr -v=3: (1.292438836s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-328069 -n no-preload-328069: exit status 7 (69.986737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-328069 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-987495 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-987495 --alsologtostderr -v=3: (1.302762581s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-987495 -n newest-cni-987495: exit status 7 (65.399945ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-987495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-987495 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/Start (80.18s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m20.177439891s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.18s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-324081 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gtsb8" [8df4d731-21bd-4bb2-bb18-4ec2a5062f82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gtsb8" [8df4d731-21bd-4bb2-bb18-4ec2a5062f82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.002935627s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (78.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m18.957891934s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.96s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-vhbjz" [d6706ed8-54eb-422c-90ca-618db1e14463] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003548888s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-324081 "pgrep -a kubelet"
I1213 10:16:26.781272    4120 config.go:182] Loaded profile config "kindnet-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zn2fz" [5f2589a5-2e77-43ea-8b66-12690864bbeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zn2fz" [5f2589a5-2e77-43ea-8b66-12690864bbeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003898107s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (58.04s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.035072858s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.04s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-m97b4" [ac87fa53-8806-424e-aa31-0ea45fcfc41d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003825139s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-324081 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jrkcq" [f445c474-c7a3-442e-ac69-6b2a7b9d9422] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:18:03.079498    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-jrkcq" [f445c474-c7a3-442e-ac69-6b2a7b9d9422] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003279998s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (55.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.632675886s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.63s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-324081 "pgrep -a kubelet"
E1213 10:19:31.005184    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8lx7q" [592d65a8-e6fe-4758-aace-9d5fe162631c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:19:31.650836    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-8lx7q" [592d65a8-e6fe-4758-aace-9d5fe162631c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.002985427s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (68.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m8.686223488s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.69s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-324081 "pgrep -a kubelet"
I1213 10:21:12.661208    4120 config.go:182] Loaded profile config "enable-default-cni-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-q4ghg" [90814ea4-d2a8-445c-aa35-9f738a622a35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-q4ghg" [90814ea4-d2a8-445c-aa35-9f738a622a35] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00371429s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
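
Note: the DNS check exercises cluster DNS end to end; nslookup inside the netcat pod resolves kubernetes.default through the kube-dns Service VIP. An illustrative extra step (not part of the test) to see which resolver address pods are handed:

    kubectl --context enable-default-cni-324081 -n kube-system get svc kube-dns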

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (69.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m9.986610276s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.99s)

TestNetworkPlugins/group/bridge/Start (51.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1213 10:22:01.434395    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:14.223139    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/auto-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:14.442973    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:42.396808    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kindnet-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-324081 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (51.098409314s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-324081 "pgrep -a kubelet"
I1213 10:22:47.161579    4120 config.go:182] Loaded profile config "bridge-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xbk8z" [34ade01e-7de1-4c05-96c0-88220006ddb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xbk8z" [34ade01e-7de1-4c05-96c0-88220006ddb8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003656666s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-zdrc7" [0e5e10dc-5236-4d5f-85f6-b15b8074f863] Running
E1213 10:22:56.199430    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:56.205798    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:56.217256    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:56.238799    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:56.280202    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:22:56.361605    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003399705s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
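
Note: ControllerPod gates the rest of the flannel group on the CNI's own DaemonSet pod reaching Running in the kube-flannel namespace. The same state can be inspected directly, assuming the profile is still up:

    kubectl --context flannel-324081 -n kube-flannel get pods -l app=flannel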

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-324081 exec deployment/netcat -- nslookup kubernetes.default
E1213 10:22:56.523689    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1213 10:22:56.844941    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-324081 "pgrep -a kubelet"
I1213 10:22:59.120667    4120 config.go:182] Loaded profile config "flannel-324081": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-324081 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-svmsn" [23c644e2-66b0-46a9-a1c1-5ded783134ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 10:23:01.329906    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-svmsn" [23c644e2-66b0-46a9-a1c1-5ded783134ea] Running
E1213 10:23:06.451361    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/calico-324081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007731803s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-324081 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-324081 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.55
400 TestNetworkPlugins/group/cilium 4.28
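
Most of the skips below are environment gates rather than failures: the MySQL, KIC and AMD-GPU cases require amd64, the docker-env/podman-env and skaffold cases require the docker runtime (this run tests containerd), and the HyperKit/Windows cases are platform-bound. A quick sketch of the two gates that dominate this matrix, assuming a shell on the worker:

    uname -m        # arm64 on this worker, tripping the amd64-only skips
    docker info     # docker driver host; containerd is the cluster runtime under test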

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-593365 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-593365" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-593365
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-130854" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-130854
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.55s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-324081 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-324081

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-324081

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/hosts:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/resolv.conf:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-324081

>>> host: crictl pods:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: crictl containers:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> k8s: describe netcat deployment:
error: context "kubenet-324081" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-324081" does not exist

>>> k8s: netcat logs:
error: context "kubenet-324081" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-324081" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-324081" does not exist

>>> k8s: coredns logs:
error: context "kubenet-324081" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-324081" does not exist

>>> k8s: api server logs:
error: context "kubenet-324081" does not exist

>>> host: /etc/cni:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: ip a s:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: ip r s:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: iptables-save:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: iptables table nat:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-324081" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-324081" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-324081" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: kubelet daemon config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> k8s: kubelet logs:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:39:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-355809
contexts:
- context:
    cluster: kubernetes-upgrade-355809
    user: kubernetes-upgrade-355809
  name: kubernetes-upgrade-355809
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-355809
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.crt
    client-key: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-324081

>>> host: docker daemon status:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: docker daemon config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: docker system info:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: cri-docker daemon status:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: cri-docker daemon config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: cri-dockerd version:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: containerd daemon status:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: containerd daemon config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: containerd config dump:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: crio daemon status:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: crio daemon config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: /etc/crio:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

>>> host: crio config:
* Profile "kubenet-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-324081"

----------------------- debugLogs end: kubenet-324081 [took: 3.348287965s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-324081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-324081
--- SKIP: TestNetworkPlugins/group/kubenet (3.55s)

TestNetworkPlugins/group/cilium (4.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1213 09:43:37.521546    4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-324081 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-324081

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-324081

>>> host: /etc/nsswitch.conf:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/hosts:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/resolv.conf:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-324081

>>> host: crictl pods:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: crictl containers:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> k8s: describe netcat deployment:
error: context "cilium-324081" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-324081" does not exist

>>> k8s: netcat logs:
error: context "cilium-324081" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-324081" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-324081" does not exist

>>> k8s: coredns logs:
error: context "cilium-324081" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-324081" does not exist

>>> k8s: api server logs:
error: context "cilium-324081" does not exist

>>> host: /etc/cni:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: ip a s:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: ip r s:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: iptables-save:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: iptables table nat:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-324081

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-324081

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-324081" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-324081" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-324081

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-324081

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-324081" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-324081" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-324081" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-324081" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-324081" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: kubelet daemon config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> k8s: kubelet logs:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:39:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-355809
contexts:
- context:
    cluster: kubernetes-upgrade-355809
    user: kubernetes-upgrade-355809
  name: kubernetes-upgrade-355809
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-355809
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.crt
    client-key: /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/kubernetes-upgrade-355809/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-324081

>>> host: docker daemon status:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: docker daemon config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: docker system info:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: cri-docker daemon status:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: cri-docker daemon config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: cri-dockerd version:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: containerd daemon status:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: containerd daemon config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: containerd config dump:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: crio daemon status:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: crio daemon config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: /etc/crio:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

>>> host: crio config:
* Profile "cilium-324081" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-324081"

----------------------- debugLogs end: cilium-324081 [took: 4.106245804s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-324081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-324081
--- SKIP: TestNetworkPlugins/group/cilium (4.28s)